Everspin CEO Sanjeev Agrawal on Why MRAM Is the Future of Memory
by Admin on 08-20-2025 at 8:00 am

Everspin’s recent fireside chat, moderated by Robert Blum of Lithium Partners, offered a crisp look at how the company is carving out a durable niche in non-volatile memory. CEO Sanjeev Agrawal’s core message was simple: MRAM’s mix of speed, persistence, and robustness lets it masquerade as multiple memory classes (data logging, configuration, even data-center memory) while solving headaches that plague legacy technologies.

At the heart of Everspin’s pitch is performance under pressure. MRAM reads and writes in nanoseconds, versus microsecond-class writes for NOR flash. In factory automation, that difference is existential: robots constantly report state back to PLCs, and a sudden power loss with flash can scrap in-process work. With MRAM, the system snapshots in nanoseconds, so when power returns, machines resume exactly where they left off. The same attributes (instant writes, deterministic behavior, and automotive-grade temperature tolerance from −40°C to 125°C) translate into reliability for EV battery management systems, medical devices, and casino gaming machines (which must log every action multiple times).

Everspin organizes its portfolio into three families. “Persist” targets harsh environments and mission-critical data capture. “UNESYS” combines data and code for systems that need high-speed configuration—think FPGAs that today rely on NOR flash and suffer painfully slow updates. Agrawal’s thought experiment is vivid: a large configuration image that takes minutes on NOR could be near-instant on MRAM, reboots and rollbacks included. Finally, “Agilus” looks forward to AI and edge-inference use cases, where MRAM’s non-volatility, SRAM-like speed, and low leakage enable execute-in-place without the refresh penalties and backup complexities of SRAM.

The business model is intentionally diversified: product sales; licensing and royalties (including programs where primes license Everspin IP, qualify processes, and pay per-unit royalties); and U.S. government programs that need radiation-tolerant, mission-critical memory. On the product side, Everspin ships both toggle (field-switched) MRAM and perpendicular spin-transfer torque (STT-MRAM), the latter already designed into data-center and configuration solutions. The customer base spans 2,000+ companies—from Siemens and Schneider to IBM and Juniper—served largely through global distributors (about 90% of sales), with Everspin engaging directly early in the design-in cycle.

Strategically, the company sees a major opening as NOR flash stalls around the 40 nm node and tops out at modest densities. Everspin is shipping MRAM that “looks like NOR” at 64–128 Mb today, with a roadmap stepping from 256 Mb up to 2 Gb, aiming to land its first high-density parts in the 2026 timeframe. Management sizes this replacement and adjacent opportunity at roughly $4.3B by 2029, and claims few credible MRAM rivals exist at these densities. Competitively, a South Korea–based supplier fabbed by Samsung offers a single native 16 Mb die (binned down to 4 Mb) with a longer-dated jump to 1 Gb, while Avalanche focuses on aerospace/defense with plans for 1 Gb STT-MRAM; Everspin’s pitch is breadth of densities, standard packages (BGA, TSOP), and an active roadmap.

Beyond industrial and config memory, low-earth-orbit satellites are a fast-emerging tailwind. With tens of thousands of spacecraft projected, radiation-tolerant, reliable memory is paramount; Everspin cites work with Blue Origin and Astro Digital and emphasizes MRAM’s inherent radiation resilience when paired with hardened front-end logic from primes. Supply-chain-wise, the company spreads manufacturing across regions: wafers sourced from TSMC (Washington State and Taiwan Fab 6) are finished in Chandler, Arizona; packaging and final test occur in Taiwan. For STT-MRAM, GlobalFoundries handles front end (Germany) and MRAM back end (Singapore), with final assembly/test again in Taiwan, diversifying tariff and logistics exposure.

Financially, Everspin reported another profitable quarter, slightly ahead of guidance and consensus, citing early recovery from customers’ post-pandemic inventory overhang and growing design-win momentum for its XSPI family (with revenue contribution expected in 2025). The balance sheet remains clean—debt-free with roughly $45M in cash—and the go-to-market team has been bolstered with a dedicated VP of Business Development and a new VP of Sales recruited from Intel’s Altera business.

Agrawal’s long-view is unabashed: MRAM is “the future of memory.” Whether replacing NOR in configuration, displacing SRAM at the edge for inference, or anchoring radiation-tolerant systems in space, Everspin’s thesis is that one versatile, non-volatile, fast, and power-efficient technology can simplify architectures while cutting energy use—an advantage that grows as AI workloads proliferate. If execution matches the roadmap, the company stands to be the domestic standard-bearer for MRAM across a widening set of markets.

See the full transcript here.

Also Read:

Weebit Nano Moves into the Mainstream with Customer Adoption

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Weebit Nano is at the Epicenter of the ReRAM Revolution


A Principled AI Path to Spec-Driven Verification
by Bernard Murphy on 08-20-2025 at 6:00 am

I have seen a flood of verification announcements around directly reading product specs through LLM methods and, from there, directly generating test plans and test suite content to drive verification. Conceptually, automating this step makes a lot of sense. Carefully interpreting such specs is even today a largely manual task, requiring painstaking attention to detail, in which even experienced and diligent readers can miss or misinterpret critical requirements. As always with any exciting concept, the devil is in the details. I talked with Dave Kelf (CEO at Breker) and Adnan Hamid (President and CTO at Breker) to better understand those details and the Breker approach.

The devil in the specs

We like to idealize specs as one or more finalized documents representing the ultimate description of what is required from the design. Of course that’s not reality; maybe it never was, and it certainly isn’t today, when specs change rapidly, even during design, adapting to dynamically evolving market demands and implementation constraints.

A more realistic view, according to Adnan, is a base spec supplemented by a progression of engineering change notices (ECNs), each reflecting incremental updates to the prior set of ECNs on top of the base spec. Compound that with multiple product variants covering different feature extensions. As a writer I know that even if you can merge a set of changes on top of a base doc, the merge may introduce new problems in readability, even in critical details. Yet in review and updates, reviewers naturally incline toward edits considered in a local context rather than evaluating each edit in the context of the whole doc and its spec variants on each update.

Worse yet, in this fast-paced spec update cycle, clarity of explanation often takes a back seat to urgent partner demands, especially when multiple writers contribute to the spec. In the base doc, and especially in ECNs, developers and reviewers lean heavily on assumptions that a “reasonable” reader will not need in-depth explanations for basic details. “Basic” in this context is a very subjective judgement, often assuming more in-depth domain expertise as ECNs progress. In the domain we are considering this is specialized know-how. How likely is it that a trained LLM will have captured enough of that know-how? Adnan likes to point to the RISC-V specs as an example illustrating all these challenges, with a rich superstructure of extensions and ECNs which he finds can in places imply multiple possible interpretations that may not have been intended by the spec writers.

Further on the context point, expert DV engineers read the same specs, but they also talk to design engineers, architects, apps engineers, and marketing to gather additional input which might affect expected use cases and other constraints. And they tap their own specialized expertise developed over years of building and verifying similar designs. None of this is likely to be captured in the spec or in the LLM model.

Automated spec analysis can be enormously valuable in consolidating ECNs with the base spec to generate a detailed and actionable test plan, but that must then be reviewed by a DV expert and corrected through additional prompts. The process isn’t hands-free but nevertheless could be a big advance in DV productivity.

The Breker strategy

Interestingly, Breker test synthesis is built on AI methods created long before we had heard of LLMs or even modern neural nets. Expert systems and rule-based planning software were popular technologies found in early AI platforms from IBM and others, and they became the foundation of Breker test synthesis. Using these methods, Breker software has been in production and highly effective for many years, auto-generating very sophisticated tests from (human-interpreted) high-level system descriptions. Importantly, while the underlying test synthesis methods are AI-based, they are much more deterministic than modern LLMs and not prone to hallucination.

Now for the principled part of the strategy. It is widely agreed that all effective applications of AI in EDA must build on a principled base to meet the high accuracy and reliability demands of electronic design. For example, production EDA software today builds on principled simulation, principled place and route, and principled multiphysics analysis. The same must surely apply to functional test plan definition and test generation, for which quality absolutely cannot be compromised. Given this philosophy, Breker is building spec interpretation on top of its test synthesis platform along two branches.

The first branch is Breker’s own development around natural language processing (NLP), again not LLM-based. NLP also pre-dates LLMs, using statistical methods rather than the attention-based LLM machinery. Dave and Adnan did not share details, but I feel I can speculate a little without getting too far off track. NLP-based systems are even now popular in domain-specific areas. I would guess that a constrained domain, a specialist vocabulary, and a presumed high level of expertise in the source content can greatly simplify learning versus the requirements for general-purpose LLMs. These characteristics could be an excellent fit for interpreting design behavior specs.

Dave tells me they already have such a system in development which will be able to read a spec and convert it into a set of Breker graphs. From that they can directly run test plan generation and synthesis, with support for debug, coverage, and portability to emulation, all the capabilities that the principled base already provides. They can also build on the library of test support functions already in production: coherency checking, RISC-V compliance and SoC readiness, Arm SocReady, power management, and security root of trust checks. This is a rich set of capabilities which doesn’t need to be re-discovered in detail in a general-purpose LLM model.
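
To make the graph idea concrete, here is a minimal sketch of graph-based test generation: a directed scenario graph whose nodes are actions and whose walks become test sequences. The graph structure, node names, and Python representation are my own illustration of the general technique, not Breker's actual format.

```python
import random

# Hypothetical scenario graph: nodes are actions, edges are legal follow-ons.
SCENARIO = {
    "init": ["config_dma", "config_cache"],
    "config_dma": ["start_xfer"],
    "config_cache": ["start_xfer"],
    "start_xfer": ["check_coherency", "done"],
    "check_coherency": ["done"],
}

def synthesize_test(graph, start="init", end="done", seed=None):
    """Walk the graph from start to end; each walk is one test sequence."""
    rng = random.Random(seed)
    node, path = start, [start]
    while node != end:
        node = rng.choice(graph[node])  # constrained-random choice of next action
        path.append(node)
    return path

# Two different seeds give two legal, schedulable test sequences
print(synthesize_test(SCENARIO, seed=1))
print(synthesize_test(SCENARIO, seed=2))
```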

The second branch is partnership with some of the emerging agentic ventures. In general, what I have seen among such ventures is either strong agentic expertise but early days in building verification expertise, or strong verification expertise but early days in building agentic expertise. A partnership between a strong agentic venture and a strong verification company seems like a very good idea. Dave and Adnan believe that what they already have in the Breker toolset, combined with what they are learning in their internal development, can provide a bridge between these platforms, combining full-blown LLM-based spec interpretation with the advantages of principled test synthesis.

My takeaway

This is a direction worth following. All the excitement and startup focus in verification is naturally on LLMs, but today accuracy levels are not yet where they need to be to ensure production-quality test plans and test generation without expert DV guidance. We need to see increased accuracy, and we need to gain experience in the level of expert effort required to bring auto-generated tests to signoff quality.

Breker is investing in LLM-based spec-driven verification, but like other established verification providers it must meanwhile continue to support production-quality tools and advance the capabilities of those tools. Their two-pronged approach to spec interpretation aims to have the best of both worlds: an effective solution for the near term through NLP interpretation and an LLM solution for the longer term.

You can learn more about the principled Breker test synthesis products HERE.


Exploring the Latest Innovations in MIPI D-PHY and MIPI C-PHY
by Daniel Nenni on 08-19-2025 at 10:00 am

The white paper “Exploring the Latest Innovations in MIPI D-PHY and MIPI C-PHY” details the latest developments in these two critical high-speed interface technologies, highlighting how they evolve to meet modern demands in camera and display systems across automotive, industrial, healthcare, and XR applications.

The evolution of MIPI D-PHY and MIPI C-PHY reflects the ongoing push toward higher performance and power-efficient data interfaces in camera and display systems. Originally developed for the mobile industry, both PHY types have significantly matured to support diverse applications in automotive, healthcare, industrial vision, and extended reality (XR). These advancements are essential to accommodate surging data rates driven by higher resolutions, expanded frame rates, and real-time image processing.

MIPI D-PHY, introduced in 2009, has incrementally increased its per-lane throughput from 2.5 Gbps to 11 Gbps over several specification versions. Key to supporting these higher rates are signal integrity enhancements such as transmitter de-emphasis and receiver Continuous Time Linear Equalization (CTLE), first introduced in v2.0. Version 3.5 added non-linear Decision Feedback Equalization (DFE) to further improve signal performance, especially in the 6–11 Gbps range. These techniques help mitigate channel losses across increasingly complex physical environments including PCB traces, packages, and connectors.
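
As a rough illustration of what DFE does (a generic textbook sketch, not the MIPI-specified equalizer; tap values and signal levels below are made up), the receiver subtracts the estimated inter-symbol interference contributed by already-decided symbols before slicing the next one:

```python
import numpy as np

def dfe_equalize(received, feedback_taps, levels):
    """Symbol-spaced DFE sketch: cancel post-cursor ISI from past decisions,
    then slice to the nearest signaling level."""
    decisions = np.zeros(len(received))
    for n in range(len(received)):
        # ISI estimate from the most recent decided symbols (post-cursor taps)
        isi = sum(b * decisions[n - 1 - k]
                  for k, b in enumerate(feedback_taps) if n - 1 - k >= 0)
        corrected = received[n] - isi
        # Slice to the nearest level (NRZ here; PAM levels work the same way)
        decisions[n] = min(levels, key=lambda lv: abs(lv - corrected))
    return decisions

# Made-up received samples and feedback taps, sliced against NRZ levels
rx = np.array([0.9, -1.2, 0.7, 1.1, -0.8])
print(dfe_equalize(rx, feedback_taps=[0.2, 0.05], levels=[-1, 1]))
```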

The reliability challenges that arose as silicon geometries shrank were tackled by introducing new signaling modes. The original 1.2V LVCMOS signaling used for low-power control became problematic in modern nodes with lower core voltages. MIPI D-PHY responded by offering LVLP mode to lower the voltage swing to 0.95V and ultimately developed the Alternate Low Power (ALP) mode. ALP mode discards the LP transmitter/receiver entirely, reusing the high-speed circuits for low power signaling. This not only improves leakage characteristics and reduces IO loading but also enables the PHY to operate over longer channels, up to 4 meters.

The ALP signaling introduces the ALP-00 state, a collapsed differential mode where both wires are grounded, minimizing power during idle periods. Wake pulses and high-speed bursts are coordinated using embedded control signals, enhancing synchronization. Notably, ALP also supports fast lane turnaround, which significantly reduces latency in bidirectional interfaces compared to legacy LP-mode lane switching. Combined with spread spectrum clocking, first introduced in v2.0 to mitigate EMI, MIPI D-PHY’s power and emissions profile is increasingly well-suited for automotive and industrial-grade deployments.

In a major architectural shift, MIPI D-PHY v3.5 introduced Embedded Clock Mode (ECM). In ECM, clock information is no longer carried on a dedicated lane but embedded in the data stream using 128b/132b encoding with clock and data recovery (CDR). This allows the clock lane to be repurposed as a fifth data lane, increasing throughput by 25% in common configurations. ECM also reduces EMI by eliminating the always-on toggling clock line and permits skew-insensitive timing between data lanes. However, the trade-off is reduced backward compatibility: ECM-only PHYs cannot interoperate with older Forwarded Clock Mode (FCM)-only devices.
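
A back-of-envelope check of that throughput claim (the per-lane rate and lane counts here are assumptions for illustration): the headline 25% comes from the raw lane count going from four to five, while the 128b/132b line code trims the net gain slightly.

```python
per_lane_gbps = 11.0        # max per-lane rate in D-PHY v3.5 (assumed here)
line_code = 128 / 132       # ECM embeds the clock via 128b/132b encoding

fcm = 4 * per_lane_gbps               # FCM: 4 data lanes + dedicated clock lane
ecm = 5 * per_lane_gbps * line_code   # ECM: clock lane becomes a 5th data lane

print(f"FCM: {fcm:.1f} Gbps, ECM: {ecm:.1f} Gbps, "
      f"net gain {(ecm / fcm - 1) * 100:.0f}%")   # ~21% after coding overhead
```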

MIPI C-PHY, launched in 2014, uses a 3-wire lane and a ternary signaling method to achieve efficient data encoding. The original 6-wirestate configuration encoded 16 bits in 7 symbols for an encoding efficiency of 2.28x. As symbol rates increased from 2.5 to 6 Gsps, data rates rose to 13.7 Gbps per lane. Equalization support was expanded in versions 1.2 and 2.0 through advanced TxEQ, CTLE, and various training sequences. Low power features were also introduced, including LVHS, LVLP, and ALP modes, often mirroring MIPI D-PHY enhancements while adapting them to MIPI C-PHY’s unique signaling format.

The landmark change came with MIPI C-PHY v3.0 and the 18-Wirestate mode. This innovation retains the same 3-wire lane interface but increases encoding efficiency to 3.55x by introducing 18 distinct differential states across wire pairs. With this, the PHY can achieve up to 24.84 Gbps per lane on short channels. New encoding schemes and state transitions were developed, with each symbol defined by a 5-bit code representing polarity, rotation, and flip attributes. The additional signaling levels require multi-level slicers in the receiver and increased TX power but enable significantly greater throughput.
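
The encoding-efficiency arithmetic is easy to reproduce; note the roughly 7 Gsps symbol rate for 18-Wirestate below is my inference from the quoted 24.84 Gbps, not a number stated in the paper.

```python
# 6-wirestate: 16 bits carried in 7 symbols
eff_6ws = 16 / 7                                   # ~2.29 bits per symbol
print(f"6-wirestate: {eff_6ws:.2f}x -> {6.0 * eff_6ws:.1f} Gbps at 6 Gsps")

# 18-Wirestate: 3.55x efficiency per the white paper
eff_18ws = 3.55
print(f"18-Wirestate: {eff_18ws}x -> {7.0 * eff_18ws:.2f} Gbps at ~7 Gsps")
# ~24.85 Gbps, matching the quoted 24.84 Gbps to rounding
```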

The 18-Wirestate system also introduces a more sophisticated lane mapping and control mechanism. By embedding turnaround codes into the last transmitted symbol burst, MIPI C-PHY accelerates lane reversal, improving duplex performance. Furthermore, signal integrity is preserved through careful voltage slicing and receiver sensitivity enhancements, ensuring reliability despite reduced signal-to-noise ratio due to the multi-level signaling.

Together, the continued evolution of MIPI D-PHY and MIPI C-PHY demonstrates the MIPI Alliance’s focus on scalable, forward-compatible solutions that can bridge mobile, automotive, and emerging computing environments.

You can read the full white paper here.

Also Read:

Mixel at the 2025 Design Automation Conference #62DAC

2025 Outlook with Justin Endo of Mixel

MIPI solutions for driving dual-display foldable devices


448G: Ready or not, here it comes!
by Kalar Rajendiran on 08-19-2025 at 6:00 am

448G Host Channel Topologies Analyzed

The march toward higher-speed networking continues to be guided by the same core objectives as always: increase data rates, lower latency, improve reliability, reduce power consumption, and maintain or extend reach while controlling cost. For the next generation of high-speed interconnects, these requirements are embodied in the development of 448G high-performance SerDes. This will serve as the foundational electrical layer for scaling Ethernet beyond 1.6T, while also enabling other advanced interconnect architectures in AI, storage, and cloud-scale computing.

Meeting the core objectives has become progressively more complex with each generation. Today’s challenges include the diverging performance needs of AI and general networking applications, the distinct technical contributions of multiple standards organizations, and the growing sophistication required in electrical PHY implementations to meet 448G signaling requirements. The push for 448G is taking place in a context where the industry is both motivated to move quickly and tasked with solving a broader set of technical and deployment variables than ever before.

Synopsys provided a status update on this very topic during a webinar it hosted recently. The webinar was delivered by Kent Lusted, Distinguished Architect, and Priyank Shukla, Director of Product Management. The webinar is now accessible on demand here.

Industry Readiness and Market Drivers

The readiness for 448G Electrical PHY adoption is high, particularly in AI-driven scale-up and scale-out data center networks where back-end interconnect bottlenecks are already limiting system performance. Operators in these environments are pressing for faster rates with minimal latency and power, making 448G the logical next step. Hyperscalers and large enterprise operators are preparing infrastructure roadmaps that anticipate both short-reach copper-based implementations and longer-reach optical deployments.

From a supply chain perspective, SerDes technology at 224G per lane — the immediate precursor to 448G dual-lane architectures — has matured rapidly, providing a strong technical foundation. This maturity enables early prototyping of 448G PHYs, ensuring that when the standards are finalized, both silicon and system designs will be ready for deployment.

Emerging Standards and Collaborative Progress

Multiple standards bodies are actively shaping the path to 448G Electrical PHYs. The Optical Internetworking Forum (OIF) launched its CEI-448G framework project in July 2024, setting the stage for defining channel characteristics, modulation targets, and reach objectives. The IEEE P802.3dj task force is extending Ethernet standards to 1.6T and 200G-per-lane, with 448G PHYs as critical building blocks. The Ultra Ethernet Consortium (UEC) and UALink are aligning electrical interface specifications with AI-scale fabric requirements, while the Storage Networking Industry Association (SNIA) is hosting workshops to converge perspectives from AI, storage, and networking domains. The Open Compute Project (OCP) continues to drive deployment-oriented specifications, addressing form factors, integration models, and operational considerations for hyperscale adoption.

This collaborative landscape ensures that the final specifications will be both technically robust and deployment-ready, even as each organization brings a unique emphasis—be it Ethernet protocol compliance, electrical interoperability, electro-optical links, AI optimization, or system integration.

Advanced Modulation Schemes

Selecting the optimal modulation for 448G is one of the most significant technical decisions in PHY design. The primary candidates—PAM4, PAM6, CROSS-32, DSQ-32, PR-PAM4, BiDi-PAM4, SE-PAM4, and DMT—offer varying trade-offs between bandwidth efficiency, signal-to-noise ratio, complexity, and compatibility.

PAM4 remains attractive for its backward compatibility and alignment with optical implementations, though it demands higher circuit bandwidth. PAM6 offers some bandwidth relief but at the cost of more complex DSP and reduced noise margin. Two-dimensional constellations like CROSS-32 and DSQ-32 can improve detector margin for certain symbol patterns but require even more sophisticated detection algorithms. Other approaches, such as BiDi-PAM4 and SE-PAM4, aim to maintain I/O count while introducing new signal recovery challenges. The final modulation choice—or set of choices—must balance implementation feasibility with performance goals across AI and non-AI environments.
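
The bandwidth trade-off between PAM4 and PAM6 follows directly from bits per symbol. A quick sketch (coding overhead is ignored, and the single-lane 448G target is an assumption for illustration):

```python
import math

target_gbps = 448   # assumed single-lane target; halve for a dual-lane split
for name, levels in [("PAM4", 4), ("PAM6", 6)]:
    bits_per_symbol = math.log2(levels)       # PAM4: 2.00, PAM6: ~2.58
    gbaud = target_gbps / bits_per_symbol
    print(f"{name}: {bits_per_symbol:.2f} bits/symbol -> ~{gbaud:.0f} GBaud")
```

PAM4 needs roughly 224 GBaud while PAM6 needs only about 173 GBaud for the same payload, which is the "bandwidth relief" cited above, bought at the price of tighter level spacing and more complex detection.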

Next-Generation Channel Designs

Channel topology is a critical determinant of PHY performance at 448G. AI-oriented deployments tend to favor short, low-loss paths such as direct attach copper cables, near-package interconnects, or co-packaged optics (CPO), which simplify equalization and reduce latency. In contrast, front-panel optical modules in general networking often require longer PCB traces, multiple connectors, and possibly retimers, all of which increase signal degradation and complexity in the receiver.

Synopsys’ analysis of 12 channel topologies, in collaboration with Samtec, evaluated performance under realistic conditions including crosstalk, jitter, noise, and ratio level mismatch (RLM). These results inform the specification process by showing how PAM4 and PAM6 modulation schemes perform across a range of physical configurations, ensuring that reach and margin targets are grounded in actual channel behavior.

Design Complexity in 448G Electrical PHYs

Implementing a 448G PHY in SerDes form involves overcoming significant design challenges. At such high data rates, unit intervals are extremely short, requiring precise timing recovery, advanced feed-forward and decision-feedback equalization, and high-resolution ADC/DAC operation. Moving from PAM4 to PAM6, for instance, increases the number of symbol transitions from 16 to 36, the number of comparators in an unrolled DFE from 16 to 36, and the detector bit width from 2 bits to 3 bits, all of which demand higher precision and potentially higher power. These realities must be considered alongside the modulation choice, packaging strategy, and thermal constraints. No clear implementation advantage for either approach has been identified to date.
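
The complexity numbers quoted above follow from squaring the level count; a small sketch reproducing them (real comparator counts grow further with the number of unrolled DFE taps):

```python
import math

for name, levels in [("PAM4", 4), ("PAM6", 6)]:
    transitions = levels ** 2          # possible symbol-to-symbol transitions
    comparators = levels ** 2          # comparators in a 1-tap unrolled DFE
    detector_bits = math.ceil(math.log2(levels))
    print(f"{name}: {transitions} transitions, {comparators} comparators, "
          f"{detector_bits}-bit detector")
```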

Synopsys’ Role in Accelerating 448G Development

Synopsys is contributing to accelerated 448G development by engaging across all major forums—IEEE, OIF, UEC, SNIA, and OCP—ensuring that data, insights, and perspectives are shared early and often. The company’s early studies on channel topologies, combined with in-depth analysis of modulation trade-offs and DSP architecture complexity, give standards bodies a concrete basis for narrowing options quickly.

By promoting a phased approach that will help deliver the right PHYs and interfaces “just in time”, Synopsys helps ensure that high-priority deployments can begin without waiting for the complete standards suite to mature. This strategy, combined with technical modeling, real-world measurements, and active ecosystem engagement, positions Synopsys as a key enabler of the industry’s transition to 448G Electrical PHYs.

Also Read:

Synopsys Webinar – Enabling Multi-Die Design with Intel

cHBM for AI: Capabilities, Challenges, and Opportunities

Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh


Should the US Government Invest in Intel?
by Daniel Nenni on 08-18-2025 at 10:00 am

“Most companies don’t die because they are wrong; most die because they don’t commit themselves. They fritter away their valuable resources while attempting to make a decision. The greatest danger is standing still.” That is from Andy Grove’s Only the Paranoid Survive, first published in 1996.

Looking back 20 years, we all know this is true for Intel and many other companies. Personally, I feel that the lack of competition was Intel’s undoing. AMD barely survived on the NOT INTEL market for a long time before finally presenting a competitive threat, which is where we are today.

There are a lot of conspiracy theories floating around. The media seems to be in a death spiral over Intel, which has caused a surge of interest on SemiWiki. We are happy to deploy additional cloud resources to keep up with the surge in traffic as it is a worthwhile cause. Intel is a national treasure that must be protected, absolutely.

It would be difficult to list all of the semiconductor manufacturing innovations that came out of Intel over the last 50 years; the list is lengthy. Suffice to say, Intel is THE most innovative company in the history of semiconductors. The latest innovation is Back Side Power Delivery (BSPD), which Intel has today at 18A. Other foundries will follow in the years to come, but Intel is the leader in BSPD and I’m excited to see what is next.

Just to recap recent events:

Lip-Bu Tan took over as Intel CEO on March 18th of 2025. Four months later, Lip-Bu has reshaped Intel to rid it of the analysis paralysis that had built up over the years.

On August 6th Senator Tom Cotton sent a letter to Intel Chairman Frank Yeary questioning Lip-Bu Tan’s investments in China:

“Mr. Tan reportedly controls dozens of Chinese companies and has a stake in hundreds of Chinese advanced-manufacturing and chip firms. At least eight of these companies reportedly have ties to the Chinese People’s Liberation Army.”

This is not true of course. Senator Tom Cotton is up for re-election next year and has a reputation for being a China Hawk, so this is US politics at its worst.

On August 7th Donald Trump (POTUS) responded by posting on Truth Social that “The CEO of Intel is highly CONFLICTED and must resign, immediately. There is no other solution to this problem.”

After the media immediately parroted calls for Lip-Bu to resign, Lip-Bu Tan responded with a letter to Intel employees clearly stating where his loyalties lie. This is one of the best written documents I have seen released from Intel. It is 100% factual with zero political or marketing spin.

On the following Monday (August 11th) Lip-Bu flew to Washington DC and met with POTUS to present Intel’s value proposition for the United States of America as a US-based leading-edge semiconductor manufacturer. The meeting has been viewed as highly positive after POTUS followed up his first post about Lip-Bu Tan with, “His success and rise is an amazing story. Mr. Tan and my Cabinet members are going to spend time together, and bring suggestions to me during the next week.”

To me this supports my optimism about Lip-Bu Tan’s leadership and his ability to make Intel relevant again.

The question we all have is: how will this move forward? Will the USG invest in Intel, and what would that look like? Even more importantly, was all of this planned in advance?

The media has said it was in fact planned in advance so political insiders could short Intel stock before the Tom Cotton letter came out to make a quick dollar. I highly doubt that, since stock malfeasance is a felony, but so is defamation, so someone should be held accountable here.

Bottom line: The United States should invest in outcomes, the outcome here being a competitive US-based leading-edge semiconductor manufacturer. The United States should be able to do this without defaming legendary CEOs and iconic companies like Intel, but so be it. Hopefully this time around the ends will justify the highly politicized means.

Also Read:

Should Intel be Split in Half?

Making Intel Great Again!

Why I Think Intel 3.0 Will Succeed


PDF Solutions and the Value of Fearless Creativity
by Mike Gianfagna on 08-18-2025 at 6:00 am

PDF Solutions has been around for over 30 years. The company began with a focus on chip manufacturing and yield. Since the beginning, PDF Solutions anticipated many shifts in the semiconductor industry and has expanded its impact with enhanced data analytics and AI. Today, the company’s impact is felt from design to manufacturing, to the entire product lifecycle across the semiconductor supply chain as shown in the graphic above.

I did a bit of digging to understand the history and impact of this unique company. Along the way, I found a short but informative video (a link is coming). One of the speakers is a high-profile individual who talked about PDF Solutions and the value of fearless creativity. That quote did a great job of setting the tone for what follows.

A Memorable Conversation

Dr. Christophe Begue

To begin my quest, I was able to spend some time talking with Christophe Begue, VP of corporate strategic marketing at PDF. Besides PDF, Dr. Begue has a long history of technology and business development at companies such as Oracle, IBM, and Philips Electronics. As Christophe began describing some of the innovations at PDF, he discussed the key role that the PDF data platform plays in the industry. In the cloud, PDF manages over 2.5PB of data, the equivalent of 3 million hours of video. Many customers deploy the PDF database on site, which allows them to centrally manage all their massive manufacturing data.

He went on to explain that PDF has always been focused on anticipating the next challenge the semiconductor industry will face so it can stay ahead of the problem. Current challenges include dealing with innovations in 3D, operating through a complex global supply chain, and figuring out how to leverage AI at all levels, from design to manufacturing. He described how PDF’s capabilities are at the intersection of these challenges, with a common platform and database to improve collaboration and scale AI to drive operational efficiency.

Christophe went on to describe PDF’s three solution areas that all leverage a common data infrastructure powered by AI. The platform contains several significant capabilities that impact the entire semiconductor industry. This includes systems to help with characterization, including a unique contactless probing system developed by PDF. More on that in a moment.  There are also technologies to help optimize manufacturing and others to integrate and coordinate the worldwide supply chain. The diagram below summarizes how these systems leverage a common data infrastructure powered by AI to impact the entire semiconductor industry.

Christophe also described the broad impact PDF has on the entire semiconductor supply chain. The substantial design and manufacturing challenges the industry is facing can only be addressed with broad-based collaboration. There are many parts of the puzzle, and they all must fit together optimally to make progress. PDF Solutions is a significant catalyst for this broad-based collaboration. PDF believes this can only be done through an open platform. With that open platform, PDF is able to integrate with other solution providers used across the semiconductor industry. He shared the graphic below to illustrate the breadth of PDF’s impact and partnerships.

A Short but Potent Video

Dr. Dennis Ciplickas

This is the video that inspired the title of this post. It came from a discussion with Dr. Dennis Ciplickas, who spent 25 years at PDF Solutions. He said, “The time I spent at PDF fundamentally changed how I do my job…Being creative and giving yourself the freedom to be creative, even if you don’t really understand everything about the domain can lead to things you didn’t expect. Working at PDF we were always pushing the boundaries…and that led to insights…it’s the value of fearless creativity.” Dennis is now the technical lead manager for silicon product development at one of the leading cloud hyperscalers.

This video also describes the invention by PDF of eProbe, a contactless electrical test system that scans an entire wafer to identify and analyze hot spots. This is one example of how PDF pushes boundaries and improves the quality of semiconductor devices. There are many more such examples.

To Learn More

Dr. John Kibarian

I’ve provided just an overview of how PDF Solutions is changing the semiconductor ecosystem. There is so much more to the story. If advanced design is giving you a headache, you need to know about PDF Solutions. The company can help. You can listen to an informative podcast with Dr. John Kibarian, president, CEO and co-founder of PDF Solutions on SemiWiki here. There’s also a great interview with John on Investment Reports here.

There are a couple of excellent blog posts available on PDF’s website as well:

Secure Data Collaboration in the Semiconductor Industry: Unlocking Innovation Through AI and Connectivity

Perspectives on PDF Solutions Performance in 2024 and Path Forward

And of course, that excellent video I mentioned can be accessed here. And that’s an introduction to PDF Solutions and the value of fearless creativity.


Gartner Top Strategic Technology Trends for 2025: Agentic AI
by Admin on 08-17-2025 at 10:00 am

Figure 1: Mind the AI Agency Gap

Agentic AI refers to goal-driven software entities—“digital coworkers”—that can plan, decide, and act on an organization’s behalf with minimal supervision. Unlike classic chatbots or coding assistants that respond only to prompts, agentic systems combine models (e.g., LLMs) with memory, planning, tools/APIs, sensing, and guardrails so they can pursue outcomes, not just generate content.

Why now

Vendors are equipping assistants with planning and tool-use. Startups offer agent-building platforms; hyperscalers are weaving agentic capabilities into their stacks. As this matures, AI shifts from advisory to operational—able to analyze data across systems overnight, execute workflows, and report what it finished versus what still needs a human decision.

Opportunity landscape
  1. Performance gains that compound. Agents learn from feedback and environment, so quality and speed improve over time.

  2. Decision acceleration. They scan complex datasets, identify patterns, and take next actions, reducing modeling overhead and time-to-impact.

  3. Workforce augmentation. Natural-language orchestration lets teams manage intricate projects and micro-automations without deep tooling expertise.

  4. Scale and coverage. Multiagent systems coordinate many specialized agents—each perceiving and acting—to tackle goals no single agent could handle.

  5. Experience automation. From purchase to follow-up, agents can personalize outreach, time communications, and launch cross-sell offers, closing the loop without human intermediaries.

Strategic planning assumptions (2028 horizon)
  • One-third of enterprise apps will embed agentic AI (up from <1% in 2024).

  • Machine customers (agentic buyers) will handle about one-fifth of storefront interactions.

  • At least 15% of day-to-day work decisions will be made autonomously.

What changes in practice

Workflows will be designed for agents first, with humans inserted at high-value control points. Collaboration becomes tri-directional: humans→agents, agents→agents, and agents→humans. Software developers feel early impact as coding assistants evolve into agents that open tickets, refactor code, run tests, and submit merge requests. In operations, agents reconcile data, tune campaigns, or remediate incidents while you sleep—escalating only what truly needs judgment.

Risks and pitfalls
  • Governance drift. Without a registry, ownership model, and lifecycle controls, organizations can repeat the RPA “bot sprawl” problem.

  • Data quality & security. Agents act from enterprise data and tool access; poor data or weak identity controls can cause harmful actions.

  • Safety threats. Prompt injection, jailbreaks, data exfiltration, and agent-to-agent adversarial behavior demand new defenses.

  • Customer experience missteps. Autonomy can alienate customers if journeys aren’t intentionally designed.

  • Change management. Employees may resist perceived loss of control; roles must be clarified and upskilling funded.

Design principles
  • Agency is a spectrum. Decide, per workflow, what agents can observe, propose, approve, and execute.

  • User-in-the-loop by default. Start with propose/preview modes; graduate to execute-with-revert once reliability metrics pass thresholds.

  • Guardrails first. Enforce scoped permissions, environment sandboxes, rate limits, and bounded tool catalogs. Require provenance logging for every agent action.

  • Explainability & monitoring. Track goals, plans, tool calls, outcomes, and self-critique notes; alert on drift and unusual chains of actions.

  • Composable architecture. Use an orchestration layer that connects apps, data, identity, EPM/ITSM, and observability—so agents act through governed interfaces.

Near-term actions (next 6–12 months)
  1. Map candidate workflows where scale/latency matter and high-quality data already exists (support ops, marketing ops, finance close, IT service, supply planning).

  2. Define levels of agency for each: observe → recommend → execute with human approval → execute with rollback (see the sketch after this list).

  3. Stand up an “AgentOps” discipline: registry, versioning, policy as code, red-teaming, simulation testing, and automated kill-switches.

  4. Harden identity and access. Give every agent a first-class identity, least-privilege roles, secrets management, and audit trails.

  5. Measure value. Instrument outcomes (cycle time, error rate, SLA adherence, revenue lift) and require business owners for every agent.

  6. Pilot multiagent patterns. Try specialist swarms (planner, tool-user, reviewer) with explicit protocols for delegation and critique.
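
As a concrete (and entirely hypothetical) illustration of items 2 and 4, an agency ladder plus a least-privilege gate might look like the sketch below; the workflow names and policy values are invented for illustration.

```python
from enum import IntEnum

class Agency(IntEnum):
    """Agency ladder from item 2: observe -> recommend -> execute."""
    OBSERVE = 0
    RECOMMEND = 1
    EXECUTE_WITH_APPROVAL = 2
    EXECUTE_WITH_ROLLBACK = 3

# Hypothetical per-workflow policy table (an agent registry would own this)
POLICY = {
    "support_ops": Agency.EXECUTE_WITH_ROLLBACK,
    "finance_close": Agency.EXECUTE_WITH_APPROVAL,
    "it_service": Agency.RECOMMEND,
}

def gate_action(workflow: str, action: str, approved: bool = False) -> str:
    """Permit an agent action only up to the workflow's agency level;
    unknown workflows default to observe-only (least privilege)."""
    level = POLICY.get(workflow, Agency.OBSERVE)
    if level == Agency.OBSERVE:
        return f"logged only: {action}"
    if level == Agency.RECOMMEND:
        return f"proposal for human review: {action}"
    if level == Agency.EXECUTE_WITH_APPROVAL and not approved:
        return f"blocked pending approval: {action}"
    return f"executed (with audit trail): {action}"

print(gate_action("finance_close", "post journal entry"))
print(gate_action("support_ops", "issue refund"))
```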

Bottom line

Agentic AI moves enterprises from “generating insights” to taking action. The advantage goes to leaders who embed agency into architecture and governance—treating agents as Tier-1 digital coworkers with clear scopes, telemetry, and accountability—so performance scales without sacrificing safety, trust, or customer experience.

Access the Gartner whitepaper here.


AMAT China Collapse and TSMC Timing Trimming
by Robert Maire on 08-17-2025 at 10:00 am

Robert Maire

– AMAT has OK Q but horrible guide as China & Leading edge drop
– China finally chokes on indigestion & export issues -$500M hit
– TSMC trims on fab timing causing leading edge to slow -$500M hit
– Cycle which had slowed to single digits has rolled over to negative

AMAT guides down for big miss on Q4 expectations

Revenue came in at $7.3B and non-GAAP EPS was $2.48; however, guidance for the current quarter was dismal at best: $6.7B ±$500M and EPS of $2.11 ±$0.20, way below expectations.

China and leading edge will each be off by $500M

China will see a drop of about $500M due to indigestion, slowing, export licensing, etc.

The China slowing was somewhat inevitable, as the numbers have been way too high for way too long, but the rapidity of the drop is surprising and seems to have caught management off guard.

Leading edge logic/foundry is also going to be off by $500M as management blamed timing of fab projects for the “non-linearity”, a nice way of describing an unexpected drop. Management seems to have been caught by surprise by this drop as well.

Taken together, this “surprise” $1B drop was slightly offset by better performance in AGS, though far from enough to make up the difference.

TSMC likely slowing near term

We have said a number of times that TSMC is so far ahead of both Intel & Samsung that it can afford to take its foot off the gas of capex.

In addition there may also be some lumpiness of fab timing as indicated by management.

Has the spend cycle rolled over?

It certainly feels like the spend cycle has just unceremoniously rolled over.

We could be seeing the end of the long and strong China capex spend which has kept the semi equipment industry in good times.

In addition, with the bleeding edge consolidating to one player, TSMC, the business will likely become much more lumpy, with margins driven down as TSMC wields all the power and can dictate terms to equipment companies.

Memory is good, but only in HBM, and only until every manufacturer gets up to speed and oversupplies the commodity market again.

The stocks

The stocks have been doing well despite the warning signs that have been cropping up.

Most of the chip equipment companies have seen their valuations at record levels.

AMAT’s report is not just another warning sign; it’s hard evidence of slower times ahead.

The stocks have been caught up in the rising tide of AI and data center capex, but they are not 100% correlated to the same dynamic, and this may be the point of divergence.

AMAT was off 14% in the aftermarket and we will likely see a lot of investors bail on the stock and the group in general as reality hits home.

We are also somewhat surprised that Applied was so surprised by this “Double Whammy” of slowdowns and didn’t see either one coming. It doesn’t exactly give us a lot of faith in its prediction capabilities.

Obviously there will be collateral damage on LRCX, KLAC and ASML among others. But Applied likely deserves a bit more of a downside hit due to the lack of foresight.

We don’t see a lot of reason to buy on the dip as it may take a while to find its new support level and we may see a rearrangement of investors in the meantime.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.

We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Is a Semiconductor Equipment Pause Coming?

Musk’s new job as Samsung Fab Manager – Can he disrupt chip making? Intel outside

Elon Musk Given CHIPS Act & AI Oversight – Mulls Relocation of Taiwanese Fabs


CEO Interview with Russ Garcia of Menlo Micro
by Daniel Nenni on 08-17-2025 at 8:00 am

Russ Garcia, CEO of Menlo Micro

Russell (Russ) Garcia is a veteran technology executive with over 30 years of leadership experience in semiconductors, telecommunications, and advanced electronics. As CEO of Menlo Microsystems, he has led the commercialization of disruptive MEMS switch technology across RF, digital, and power systems.

Previously, Russ founded the advisory firm nGeniSys, served as an Executive in Residence at GE Ventures, and held senior leadership roles at Microsemi, Texas Instruments, and Silicon Systems. He also served as CEO of WiSpry and u-Nav Microelectronics, where he oversaw the launch of the industry’s first single-chip GPS device. Russ remains active as a board member and industry advisor.

Tell us about your company

Menlo Micro is setting a new standard in chip-based switch technology with its Ideal Switch by addressing the limitations of traditional electromechanical relays (EMRs) and solid-state (SS) switches.

Following the path of other disruptive innovators, our RF products are enabling customers in high-growth markets, particularly AI, aerospace, and defense, to miniaturize their systems while enhancing performance, capability, and reliability. Driven by AI-fueled growth in data and the xPUs supporting the expansion, we’re responding to customer demand by delivering best-in-class miniature RF switch products – necessary for testing existing and future generations of high-speed digital data buses – and scaling to support increased adoption among the top semiconductor manufacturers. We’re also expanding our high-speed, high-performance RF switch products into the aerospace and defense sectors through engagements with top defense, radar, and radio OEMs. With product adoption accelerating in the RF segment, the company is developing and positioning a new smart power control product platform to expand into AC/DC power distribution and control to meet growing demand in microgrids, data centers, and factory automation.

Our technology overcomes system-level bottlenecks caused by traditional switching, enabling customers to push performance boundaries such as accelerating AI GPU testing, delivering step-function improvements in size, weight, power, and performance for satellite communications beamforming, enhancing filtering in mobile radios, reducing energy consumption in factory automation, and improving fault detection and power distribution in energy infrastructure.

What problems are you solving?

Across industries, engineers face critical limits with traditional switching technologies. EMRs are large, slow, and prone to mechanical wear. SS switches suffer from high on-resistance, leakage, and heat generation, which limits scalability, reliability, and efficiency.

In semiconductor testing, switch performance directly affects test speed, accuracy, and cost. Traditional switches degrade signals and limit bandwidth, increasing complexity and slowing time-to-market. Aerospace and defense systems demand rugged, reliable switches that meet tight size, weight, and power constraints, yet traditional options lack durability or require bulky protection. Power systems, from industrial automation to energy grids, face thermal inefficiencies that drive overdesign, and slow switching speed limits responsiveness to system faults.

Menlo’s technology is unique because it is a true metallic conductor rather than a semiconductor; it delivers near-zero on-resistance, ultra-low power loss, and minimal heat generation. This eliminates the need for heat sinks and complex cooling, significantly improving thermal and power efficiency.

Built on a MEMS process, it achieves chip-scale integration, enabling up to 10x or more reductions in footprint and higher channel density for compact, scalable designs. It maintains reliable operation across extreme environments, from cryogenic to +150°C, and withstands shock and vibration, making it ideal for mission-critical applications.

With billions of cycles and no mechanical degradation, its long life combined with low power consumption and minimal thermal management reduces total cost of ownership through fewer replacements, simpler designs, and lower maintenance.

By solving the longstanding trade-offs between speed, power, size, and reliability, Menlo enables engineers to build smaller, faster, more energy-efficient, and reliable RF and power control systems.

What application areas are your strongest?

Our platform is strongest in high-performance industries, thanks to its broadband linearity, from DC to mmWave, and ultra-low contact resistance.

While our platform supports a wide range of demanding applications, from RF to power switching, one of Menlo’s fastest-growing areas is high-speed digital test. We’ve built a unique position by enabling high-integrity test solutions for advanced interfaces like PCIe Gen6 at up to 64 Gbps. Our switches offer a rare combination of broadband linearity from DC to mmWave, a compact footprint, and low on-resistance, ideal for both DC and high-speed environments. This dual capability allows customers to consolidate hardware, reduce signal distortion, and improve test density, improving ROI and lowering total cost of ownership. With proven reliability across billions of cycles, our solutions also minimize maintenance and system downtime – driving our growing market share in semiconductor test, especially among companies working on the next wave of AI processors, GPUs, and data center chipsets.

Looking ahead, Menlo is actively developing its next generation of switches to support PCIe Gen7 and Gen8, scaling to data rates of 128 and 256 Gbps. This roadmap is driven in close collaboration with our customers to align with their next-gen test infrastructure needs.

Beyond test, our innovations in high-speed switching are creating leverageable product platforms for adjacent markets. In aerospace and defense, for example, we’re applying this same high frequency control switching capability to ruggedized environments, where high performance, fast actuation, and extreme reliability are critical, such as phased array radar, electronic warfare, and advanced power protection systems.

How do customers normally engage with your company?

Collaboration is core to our approach. Because our technology supports everything from testing to deployment and optimization, we engage early and often, working not just to meet needs, but to anticipate them. Our team strives to “see around corners,” aligning our innovations with where the industry is headed.  To do this we create strong working partnerships with our customers – when our customers succeed through Menlo product integration, we succeed.

A strong example of this model is the development of the MM5620 switch. In 2023, we partnered with leading GPU and AI chip manufacturers to understand the growing challenges in semiconductor testing. As demands on AI chips, xPUs, and custom ASICs surged, legacy switching became a clear bottleneck, resulting in longer test cycles, increased complexity, and delayed time-to-market.

These insights led to the MM5620: a high-speed, high-linearity switch [array] delivering near-zero insertion loss, ultra-low contact resistance, and exceptional linearity from DC to mmWave. This allows next-gen device testing without compromising signal integrity or accuracy. This is a step-change in test efficiency, with customers reporting 2x faster test times, simplified hardware, lower overhead, and reduced consumables, key reasons top semiconductor companies choose to collaborate with us. Building on this success, we continue to partner with AI and high-performance computing leaders to help them stay ahead in a fast-moving, competitive market.

What keeps your customers up at night?

Our customers operate in industries like the semiconductor supply chain, aerospace & defense, communications infrastructure, and energy infrastructure, where any failure or signal degradation can lead to significant financial impact, operational downtime, and safety risks. The increasing complexity and miniaturization of modern electrical systems amplify the vulnerabilities inherent in legacy switching technologies.

As system architectures demand higher bandwidth, faster switching speeds, and tighter thermal budgets, the tolerance for insertion loss, contact resistance, and thermal dissipation issues is rapidly diminishing. Consequently, customers are under significant pressure to mitigate these risks without compromising performance and reliability, while reducing total cost of ownership. This dynamic is driving customers to collaborate with us on adoption of current product offerings as well as on their next-generation electronic systems.

What does the competitive landscape look like and how do you differentiate?

The promise of a true MEMS switch, i.e., a tiny, fast, efficient mechanical conductor rather than a semiconductor, has long been recognized. However, scalability has been the major barrier. Over 30 companies have attempted to commercialize MEMS switches, only to fail due to material and manufacturing challenges. Semiconductor fabs rely on materials like silicon (a partial conductor) or soft metals, which cannot deliver the durability and reliability required for high-cycle mechanical elements.

Backed by R&D at GE, we developed a proprietary metal alloy system engineered to be highly conductive and mechanically robust for the device actuator, and further integrated the alloy with a metal material system for reliable conductive contacts. This breakthrough in metallurgy enables the production of ultra-conductive, highly reliable switches capable of billions of actuations, delivering unmatched linearity from DC to mmWave with the highest power density per chip on the market. Processed on glass substrates with metal-filled hermetic vias, our MEMS device delivers best-in-class RF and power performance. This core construction differentiates us from competitors who either rely on semiconductor switches, limited by non-linearities, high losses, and heat, or EMR technologies that lack scalability and ruggedness. It’s the integrated system that delivers the combined best-in-class performance, at both high power and high frequency, in a miniature chip scale package.

What new features/technology are you working on?

In April 2025, we launched the MM5230, a high-performance RF switch developed with key customers to meet the demands of next-gen systems. Combining ultra-high RF performance with manufacturability, it supports advanced military communications and high-density IC parallel testing, delivering the performance, reliability, and versatility critical to today’s most demanding applications.

In June, we followed with the MM5625, engineered to dramatically increase test throughput with increased channel density in high-speed, high-volume environments such as AI GPU testing. It enables faster test cycles, greater parallelism, and improved data processing, empowering leading semiconductor manufacturers to expand testing capacity, accelerate time-to-market, and reduce total cost of ownership.

Looking ahead, Menlo Micro is working with customers on next-gen switches for PCIe Gen7 and beyond, as well as mmWave products up to 80 GHz to support advanced aerospace and defense RF systems. We’re also advancing a robust power control roadmap for AI IC testing, high-voltage DC in data centers, and smart grid and industrial automation.

In parallel, we’re partnering with the U.S. Navy and the Defense Innovation Unit (DIU) to develop 1000VDC/125A modules for 10MWe advanced circuit breaker systems in micro-nuclear reactors. These compact, low-heat modules offer 5–6X reductions in size and weight and will extend to mission-critical commercial sectors like data centers, industrial automation, and EVs.

Also Read:

CEO Interview with Karim Beguir of InstaDeep

CEO Interview with Dr. Avi Madisetti of Mixed-Signal Devices

CEO Interview with Bob Fung of Owens Design


Video EP9: How Cycuity Enables Comprehensive Security Coverage with John Elliott
by Daniel Nenni on 08-15-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by John Elliott, security applications engineer from Cycuity. John has 35 years of EDA experience, and his current focus is on security assurance of hardware designs.

John explains the importance of security coverage in the new global marketplace. He describes what’s needed to perform deep security verification of a design for both known and potentially unknown threats, and why it’s important to achieve good coverage. He also describes how Cycuity’s tools help perform the deep analysis and verification tasks to ensure a design is secure.

Contact Cycuity

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.