

Silvaco: Navigating Growth and Transitions in Semiconductor Design
by Admin on 08-22-2025 at 8:00 am


Silvaco Group, Inc., a veteran player in the EDA and TCAD space, continues to evolve amid the booming semiconductor industry. Founded in 1984 and headquartered in Santa Clara, California, Silvaco specializes in software for semiconductor process and device simulation, analog custom design, and semiconductor intellectual property. As of August 2025, the company has demonstrated resilience through record financials, strategic acquisitions, leadership shakeups, and product innovations, positioning itself as a key enabler for advanced technologies like AI, photonics, and power devices.

Financially, Silvaco closed 2024 on a high note, reporting record gross bookings of $65.8 million and revenue of $59.7 million for the full year. This momentum carried into 2025, with first-quarter bookings reaching $13.7 million and revenue at $14.1 million, alongside securing nine new customers. The second quarter saw bookings of $12.91 million and revenue of $12.05 million, with 10 new logos in sectors like photonics, automotive, military, foundry, and consumer electronics. Looking ahead, Silvaco projects third-quarter 2025 bookings between $14.0 million and $18.2 million, a 42% to 84% jump from the prior year’s third quarter. These figures reflect sustained demand for its digital twin modeling platform, which aids in yield improvement and process optimization. However, challenges emerged in late 2024, including a $2 million revenue midpoint reduction due to order freezes and pushouts, though the company retains a buy rating from analysts.

Leadership transitions marked a pivotal shift in 2025. On August 21, Silvaco announced a CEO change, with Dr. Walden C. Rhines stepping in as the new chief executive. Rhines, a semiconductor industry veteran, brings expertise from his prior roles at Mentor Graphics and Texas Instruments. This followed the departure of Babak Taheri. Complementing this, Silvaco appointed Chris Zegarelli as Chief Financial Officer effective September 15, 2025. Zegarelli, with over 20 years in semiconductor finance, previously held positions at Maxim Integrated and Analog Devices. These moves come after the March 2025 exit of the former CFO, signaling a focus on stabilizing operations amid growth.

Acquisitions have bolstered Silvaco’s portfolio. In March 2025, it expanded its offerings by acquiring Cadence’s process design kit business, enhancing its TCAD capabilities for advanced nodes. More recently, the company completed the purchase of Mixel Group, Inc., a leader in low-power, high-performance mixed-signal connectivity IP, strengthening its SIP lineup for automotive and IoT applications. On the product front, Silvaco extended its Victory TCAD and digital twin modeling platform to planar CMOS, FinFET, and advanced CMOS technologies in April 2025, enabling next-generation semiconductor development. Its tools, including FTCO™ for fab optimization and Power Device Analysis, continue to drive efficiency.

Partnerships and initiatives underscore Silvaco’s commitment to talent and ecosystem growth. In June 2024, it collaborated with Purdue, Stanford, and Arizona State universities to enhance semiconductor workforce development. Additionally, Silvaco’s EDA tools were included in India’s Design Linked Incentive (DLI) Scheme, inviting startups and MSMEs to access them via the ChipIN Centre. Upcoming events include SEMICON India (September 2-4, 2025) and IP-SoC China (September 11, 2025), where Silvaco will showcase its innovations.

Looking forward, Silvaco’s trajectory appears promising despite industry headwinds like talent shortages and rising design costs. With 46 new customers in 2024 and continued expansion, the company is well-positioned to capitalize on the AI chip market’s growth. As it integrates recent acquisitions and leverages new leadership, Silvaco aims to deliver value in a sector projected to hit $383 billion by 2032. Investors will watch closely as it navigates these dynamics toward sustained profitability.

About Silvaco Group, Inc.
Silvaco is a provider of TCAD, EDA software, and SIP solutions that enable semiconductor design and digital twin modeling through AI software and innovation. Silvaco’s solutions are used for semiconductor and photonics processes, devices, and systems development across display, power devices, automotive, memory, high performance compute, foundries, photonics, internet of things, and 5G/6G mobile markets for complex SoC design. Silvaco is headquartered in Santa Clara, California, and has a global presence with offices located in North America, Europe, Egypt, Brazil, China, Japan, Korea, Taiwan, Singapore and Vietnam. Learn more at silvaco.com.

Also Read:

Analysis and Exploration of Parasitic Effects

Silvaco at the 2025 Design Automation Conference #62DAC

TCAD for 3D Silicon Simulation



Taming Concurrency: A New Era of Debugging Multithreaded Code
by Admin on 08-21-2025 at 10:00 am


As modern computing systems evolve toward greater parallelism, multithreaded and distributed architectures have become the norm. While this shift promises increased performance and scalability, it also introduces a fundamental challenge: debugging concurrent code. The elusive nature of race conditions, deadlocks, and synchronization bugs has plagued developers for decades. The complexity of modern software systems—spanning millions of lines of code, multiple threads, and distributed nodes—demands a radical transformation in debugging methodology. This is where technologies like time travel debugging, thread fuzzing, and multi-process correlation step in.

Multithreaded applications are notoriously difficult to reason about because their behavior is often non-deterministic. A program might pass every test during development, only to fail sporadically in production due to a subtle timing bug. Race conditions arise when the outcome of code depends on the unpredictable ordering of thread execution. Deadlocks occur when threads wait on each other indefinitely. Such defects are not only hard to reproduce, but even harder to isolate and correct using conventional debugging tools.
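
To make the failure mode concrete, here is a minimal C++ sketch (my illustration, not taken from Undo's paper) of a data race: two threads increment a shared counter without synchronization, and the final total is often lower than expected because the read-modify-write sequences interleave.

```cpp
#include <iostream>
#include <thread>

int counter = 0;  // shared, unsynchronized state

void worker() {
    for (int i = 0; i < 1'000'000; ++i) {
        ++counter;  // data race: the load, increment, and store can interleave across threads
    }
}

int main() {
    std::thread t1(worker);
    std::thread t2(worker);
    t1.join();
    t2.join();
    // Expected 2,000,000, but the printed value varies from run to run,
    // and the program may even "pass" when the scheduler happens to serialize the loops.
    std::cout << "counter = " << counter << '\n';
    return 0;
}
```

Guarding the counter with std::atomic<int> or a mutex removes the race, but the point is that the buggy version can behave correctly for thousands of runs before failing, which is exactly what makes such defects so hard to catch with conventional tools.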

Historically, debugging concurrency issues involved a laborious process: reproduce the failure, guess at its cause, add logging, recompile, and try again. This loop could span weeks, especially when the problem occurred in customer environments where direct access to the system was limited. Engineers would often spend more time trying to reproduce bugs than actually fixing them.

Undo, a company specializing in advanced debugging technologies, proposes a modern solution: time travel debugging (TTD). TTD allows developers to capture a complete execution trace of their application, enabling them to step forward and backward through code execution like a video replay. Every line executed, every memory mutation, and every variable value is preserved in a recording file. With this, engineers can inspect the application at any point in time, using standard debugging tools and commands. A single recording becomes a 100% reproducible snapshot of the program’s behavior, regardless of the environment or timing.

The power of TTD is amplified when paired with thread fuzzing. This technique intentionally perturbs the scheduling of threads during testing to make hidden concurrency bugs more likely to appear. Unlike random bug hunting, thread fuzzing is systematic. It can simulate scenarios such as thread starvation, lock contention, and data race conditions—revealing defects that may occur only once in a million executions. Undo’s feedback-directed thread fuzzing, introduced in version 8.0, takes this further by identifying shared memory locations accessed by multiple threads and targeting them for more frequent thread switching. This significantly increases the likelihood of exposing race conditions.
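
The general idea behind thread fuzzing can be illustrated outside any particular tool: deliberately inject scheduling noise around shared-state accesses so that rare interleavings become common. The sketch below is a hand-rolled approximation of that concept under my own assumptions; the fuzz_point() helper is a hypothetical name, not Undo's feedback-directed implementation, which targets shared memory locations automatically as described above.

```cpp
#include <chrono>
#include <random>
#include <thread>

// Hypothetical helper: perturb the scheduler just before a shared-memory access.
// A real thread fuzzer does this systematically and, in the feedback-directed case,
// biases the perturbation toward locations observed to be shared between threads.
void fuzz_point() {
    thread_local std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> dist(0, 2);
    switch (dist(rng)) {
        case 0: std::this_thread::yield(); break;                                   // give up the time slice
        case 1: std::this_thread::sleep_for(std::chrono::microseconds(50)); break;  // stretch the race window
        default: break;                                                             // proceed immediately
    }
}

int shared_value = 0;  // intentionally unsynchronized, as in the counter example above

void racy_update() {
    fuzz_point();              // widen the window before the read
    int local = shared_value;
    fuzz_point();              // widen the window between read and write
    shared_value = local + 1;  // lost updates now surface in far fewer runs
}
```

Even this crude version makes a lost-update bug appear orders of magnitude sooner; the value of a production tool is that the perturbation is automatic, repeatable, and recorded, so a failing interleaving can be replayed rather than merely provoked.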

Another essential feature is multi-process correlation, which enables simultaneous debugging of multiple cooperating processes. Whether processes communicate via shared memory or sockets, Undo captures every inter-process read and write. By analyzing this “shared memory log,” developers can track exactly which process modified a variable and when. With commands like ublame and ugo, they can jump directly to the code responsible for a data inconsistency—an otherwise daunting task in distributed systems.

These technologies represent a paradigm shift. Debugging is no longer about hoping to reproduce a failure, but about deterministically analyzing it after it’s happened—once and for all. Major technology companies such as SAP, AMD, Siemens EDA, Palo Alto Networks, and Juniper Networks have adopted Undo to accelerate their debugging cycles and improve software reliability.

In an age where concurrency is a feature, not an option, debugging tools must evolve to match the complexity they confront. Undo’s time travel debugging, thread fuzzing, and multi-process correlation offer a robust, scalable solution. They don’t just make debugging faster—they make the previously impossible, possible. And in doing so, they free developers to focus less on chasing ghosts and more on building the future.

Read the full technical paper here.

Also Read:

Video EP7: The impact of Undo’s Time Travel Debugging with Greg Law

CEO Interview with Dr Greg Law of Undo



Perforce Webinar: Can You Trust GenAI for Your Next Chip Design?
by Mike Gianfagna on 08-21-2025 at 6:00 am


GenAI is certainly changing the world. Every day there are new innovations in the use of highly trained models to do things that seemed impossible just a short while ago. As GenAI models take on more tasks that used to be the work of humans, there is always a nagging concern about accuracy and bias. Was the data used to train the model correct?  Do we know where it all came from, and can the sources be trusted? In chip design, these concerns are certainly there as well. Errors introduced this way can be hard to find and extremely expensive to fix. If you’re not worried about this, you should be.

Perforce will hold a webinar on September 10 that deals with this class of problem in a predictable and straightforward way. You can register for the webinar here. Here is a preview of some of the topics that will be addressed in the Perforce Webinar to answer the question – can you trust GenAI for your next chip design?

The Webinar Presenter


Vishal Moondhra is the VP of Solutions Engineering at Perforce IPLM. He brings over 20 years of experience in digital design and verification, along with the right skill set and perspective for the webinar. He is a recognized expert in his field and often speaks on current challenges in semiconductor design.

His experience in engineering and senior management includes innovative startups like LGT and Montalvo and large multi-national corporations such as Intel and Sun. In 2008, Vishal co-founded Missing Link Tools, which built the industry’s first comprehensive design verification management solution, bringing together all aspects of verification management into a single platform. Missing Link was acquired by Methodics in 2012 and Methodics was acquired by Perforce in 2020.

Topics to be Covered

Vishal begins by framing the problem. What are the unique challenges and risks associated with using GenAI in the semiconductor industry? And how do these risks compare to software development? There are four areas he sets up for further investigation. They are:

  • Liability: Sensitivity around ownership and/or licensing of data used for training
  • Lack of Traceability: Challenges in tracing exactly how a model was trained
  • High Stakes: Extremely high cost of errors & impact on production timelines
  • Data Quality Concerns: Mixing external IP with internal, fully vetted datasets

A lot of the discussion that follows focuses on design flows and provenance. We all know what a design flow is, but to be clear, here is a definition of provenance:

The history of the ownership of an object, especially when documented or authenticated.

So, a key point in trusting GenAI for chip design is trusting the GenAI models. And trusting those models means knowing how the models were trained, and what data was used for that training. More broadly, it means having a full picture of the provenance of all the IP and training data that goes into your design. The problem space is actually larger than this. The figure below provides a more comprehensive view of all the information that must be tracked and managed to achieve the required level of trust for all aspects of the design.

What Needs to be Managed

Vishal discusses some of the elements required to achieve trust in using AI to train internal models. These items include:

  • Clear and auditable data provenance for all training datasets
  • Complete traceability of all IPs and IP versions
  • Secure and compliant use of both internal and external IP
  • Enhanced trust in AI adoption when using highly sensitive IP

Vishal then details the processes required to set up an AI training/machine learning pipeline. He also provides details about how Perforce IPLM can be used to implement a comprehensive system to track and validate all aspects of a design to achieve a level of trust and confidence that will provide the margin of victory for any complex design project.

To Learn More

Chip design has become far more complex. It’s not just the design complexity and that all-important “right first time” mandate. It now also includes massive IP and supporting data from a worldwide supply chain. Increasing reliance on GenAI models to accelerate the whole process brings in the additional requirements to examine how those models are trained and what data was used in the process.

All of this is important to understand and manage. This webinar from Perforce provides significant details about how to set up the needed processes and how to use the information effectively. I highly recommend you register to attend. The event will be broadcast on Wednesday, September 10 at 1:00 p.m. EDT. You can register here. Save your seat for the Perforce webinar where you will find out how you can trust GenAI for your next chip design.

Also Read:

Perforce at DAC, Unifying Software and Silicon Across the Ecosystem

Building Trust in Generative AI

Perforce at the 2025 Design Automation Conference #62DAC



Weebit Nano Moves into the Mainstream with Customer Adoption
by Mike Gianfagna on 08-20-2025 at 10:00 am


Disruptive technology typically follows a path of research, development, early deployment and finally commercial adoption. Each of these phases is difficult and demanding in different ways. No matter how you measure it, getting to the finish line is a significant milestone for any company. Weebit Nano is disrupting the way embedded non-volatile memory is implemented with its resistive RAM (ReRAM) technology. The company recently announced some new developments that indicate that the all-important move to commercial adoption is happening. Let’s look at the details as Weebit Nano moves into the mainstream with customer adoption.

Why This is Significant

Embedded non-volatile memory (NVM) finds application in many high-growth markets, including automotive, consumer, medical/wearable, industrial, and the ubiquitous AI inferencing on the edge, to name just a few. Flash has been the workhorse here for many years. But flash is hitting limits such as power consumption, speed, endurance and cost. It is also not scalable below 28nm.

Newer technologies such as ReRAM offer a way around these issues to keep the innovation pipeline going. Weebit Nano was incorporated in 2015 with a vision of creating a leap forward in storage and computing capabilities to drive the proliferation of intelligent devices. The development and market validation of its ReRAM has been an important part of that journey. This is worth watching as ReRAM delivers a combination of high performance, low power, and low cost that is not achievable by other NVMs.

You can learn more about Weebit Nano’s journey on SemiWiki here.

What Was Announced

As a public company traded on the Australian Securities Exchange (ASX), Weebit Nano recently issued a Q4 FY25 Quarterly Activities Report. A key announcement was that the technology transfer work to onsemi was progressing well, with tapeout of first demo chips embedded with Weebit ReRAM expected this year. For background, onsemi, a NASDAQ-100 company, is a tier-1 IDM that designs and manufactures its own products and also supports a few select product companies.

The announcement went on to report that after tapeout and qualification, Weebit ReRAM will be available in onsemi’s Treo™ Platform. This provides  a cost-effective, low-power NVM suitable for use in high temperature applications such as automotive and industrial. Treo is an analog/mixed signal platform available from onsemi. You can learn more about the Treo Platform here.

Weebit Nano also announced that it is on track to complete qualification at DB HiTek this calendar year. DB HiTek, formerly Dongbu HiTek, is a semiconductor contract manufacturing and design company headquartered in South Korea. DB HiTek is one of the major contract chip manufacturers, alongside TSMC, Samsung Electronics, GlobalFoundries, and UMC. It is also the second-largest foundry in South Korea, behind Samsung Electronics. You can learn more about DB HiTek here.

It was also reported that DB HiTek is demonstrating Weebit’s ReRAM, embedded in a test chip manufactured at DB HiTek, at industry events such as the PCIM conference in Germany – Europe’s largest power semiconductor exhibition. The edge AI demonstration, running on a DB HiTek 1Mb ReRAM module in silicon, showed an application of gesture recognition and effectively showcased for potential customers the advantages of integrating ReRAM on-chip.

It was also reported that technical evaluations and commercial negotiations are progressing with more than a dozen foundries, IDMs, and product companies. Weebit remains well-positioned to meet its target of securing multiple licensing agreements before the end of the calendar year.

Related to this, another key announcement is that Weebit Nano signed a design license agreement with its first product customer. The customer, a U.S.-based company, plans to incorporate Weebit’s technology into select security-related applications. This is an important milestone in Weebit Nano’s transition to commercial adoption, with more licensing deals expected soon.

To Learn More


The announcement went on to provide impressive financial details for the company. There are also comments from Coby Hanoch, CEO of Weebit Nano, that are definitely worth reading. Almost every design requires some type of embedded non-volatile memory. In the face of advancing process nodes, Weebit Nano’s ReRAM is one to watch. You can read the entire announcement here. And that’s how Weebit Nano moves into the mainstream with customer adoption.

Also Read:

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Podcast EP267: The Broad Impact Weebit Nano’s ReRAM is having with Coby Hanoch

Weebit Nano is at the Epicenter of the ReRAM Revolution



Everspin CEO Sanjeev Agrawal on Why MRAM Is the Future of Memory
by Admin on 08-20-2025 at 8:00 am


Everspin’s recent fireside chat, moderated by Robert Blum of Lithium Partners, offered a crisp look at how the company is carving out a durable niche in non-volatile memory. CEO Sanjeev Agrawal’s core message was simple: MRAM’s mix of speed, persistence, and robustness lets it masquerade as multiple memory classes (data logging, configuration, and even data-center memory) while solving headaches that plague legacy technologies.

At the heart of Everspin’s pitch is performance under pressure. MRAM reads and writes in nanoseconds, versus microsecond-class writes for NOR flash. In factory automation, that difference is existential: robots constantly report state back to PLCs, and a sudden power loss with flash can scrap in-process work. With MRAM, the system snapshots in nanoseconds, so when power returns, machines resume exactly where they left off. The same attributes (instant writes, deterministic behavior, and automotive-grade temperature tolerance from −40°C to 125°C) translate into reliability for EV battery management systems, medical devices, and casino gaming machines (which must log every action multiple times).

Everspin organizes its portfolio into three families. “Persist” targets harsh environments and mission-critical data capture. “UNESYS” combines data and code for systems that need high-speed configuration—think FPGAs that today rely on NOR flash and suffer painfully slow updates. Agrawal’s thought experiment is vivid: a large configuration image that takes minutes on NOR could be near-instant on MRAM, reboots and rollbacks included. Finally, “Agilus” looks forward to AI and edge-inference use cases, where MRAM’s non-volatility, SRAM-like speed, and low leakage enable execute-in-place without the refresh penalties and backup complexities of SRAM.
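
To put rough numbers on that thought experiment, the sketch below compares rewrite times for a configuration image under assumed effective write rates; the rates and image size are illustrative assumptions of mine, not Everspin or NOR-vendor specifications.

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions only; real devices and controllers vary widely.
    const double image_bytes  = 16.0  * 1024 * 1024;  // 128 Mb (16 MB) configuration image
    const double nor_eff_bps  = 0.1   * 1024 * 1024;  // assumed effective NOR erase+program rate
    const double mram_eff_bps = 200.0 * 1024 * 1024;  // assumed effective MRAM write rate

    std::printf("NOR rewrite:  ~%.0f s\n", image_bytes / nor_eff_bps);   // minutes-scale
    std::printf("MRAM rewrite: ~%.2f s\n", image_bytes / mram_eff_bps);  // well under a second
    return 0;
}
```

Under these assumptions the NOR rewrite lands in the minutes range while the MRAM rewrite is a fraction of a second, which is the gap the "reboots and rollbacks included" comment is pointing at.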

The business model is intentionally diversified: product sales; licensing and royalties (including programs where primes license Everspin IP, qualify processes, and pay per-unit royalties); and U.S. government programs that need radiation-tolerant, mission-critical memory. On the product side, Everspin ships both toggle (field-switched) MRAM and perpendicular spin-transfer torque (STT-MRAM), the latter already designed into data-center and configuration solutions. The customer base spans 2,000+ companies, from Siemens and Schneider to IBM and Juniper, served largely through global distributors (about 90% of sales), with Everspin engaging directly early in the design-in cycle.

Strategically, the company sees a major opening as NOR flash stalls around the 40 nm node and tops out at modest densities. Everspin is shipping MRAM that “looks like NOR” at 64–128 Mb today, with a roadmap stepping from 256 Mb up to 2 Gb, aiming to land first high-density parts in the 2026 timeframe. Management sizes this replacement and adjacent opportunity at roughly $4.3B by 2029, and claims few credible MRAM rivals at these densities. Competitively, a South Korea–based supplier fabbed by Samsung offers a single native 16 Mb die (binned down to 4 Mb) with a longer-dated jump to 1 Gb, while Avalanche focuses on aerospace/defense with plans for 1 Gb STT-MRAM; Everspin’s pitch is breadth of densities, standard packages (BGA, TSOP), and an active roadmap.

Beyond industrial and config memory, low-earth-orbit satellites are a fast-emerging tailwind. With tens of thousands of spacecraft projected, radiation-tolerant, reliable memory is paramount; Everspin cites work with Blue Origin and Astro Digital and emphasizes MRAM’s inherent radiation resilience when paired with hardened front-end logic from primes. Supply-chain-wise, the company spreads manufacturing across regions: wafers sourced from TSMC (Washington State and Taiwan Fab 6) are finished in Chandler, Arizona; packaging and final test occur in Taiwan. For STT-MRAM, GlobalFoundries handles front end (Germany) and MRAM back end (Singapore), with final assembly/test again in Taiwan, diversifying tariff and logistics exposure.

Financially, Everspin reported another profitable quarter, slightly ahead of guidance and consensus, citing early recovery from customers’ post-pandemic inventory overhang and growing design-win momentum for its XSPI family (with revenue contribution expected in 2025). The balance sheet remains clean—debt-free with roughly $45M in cash—and the go-to-market team has been bolstered with a dedicated VP of Business Development and a new VP of Sales recruited from Intel’s Altera business.

Agrawal’s long view is unabashed: MRAM is “the future of memory.” Whether replacing NOR in configuration, displacing SRAM at the edge for inference, or anchoring radiation-tolerant systems in space, Everspin’s thesis is that one versatile, non-volatile, fast, and power-efficient technology can simplify architectures while cutting energy use—an advantage that grows as AI workloads proliferate. If execution matches the roadmap, the company stands to be the domestic standard-bearer for MRAM across a widening set of markets.

See the full transcript here.

Also Read:

Weebit Nano Moves into the Mainstream with Customer Adoption

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Weebit Nano is at the Epicenter of the ReRAM Revolution



A Principled AI Path to Spec-Driven Verification
by Bernard Murphy on 08-20-2025 at 6:00 am


I have seen a flood of verification announcements around directly reading product specs through LLM methods, and from there directly generating test plans and test suite content to drive verification. Conceptually, automating this step makes a lot of sense. Carefully interpreting such specs even today is a largely manual task, requiring painstaking attention to detail in which even experienced and diligent readers can miss or misinterpret critical requirements. As always with any exciting concept, the devil is in the details. I talked with Dave Kelf (CEO at Breker) and Adnan Hamid (President and CTO at Breker) to better understand those details and the Breker approach.


The devil in the specs

We like to idealize specs as one or more finalized documents representing the ultimate description of what is required from the design. Of course that’s not reality; maybe it never was, and it certainly isn’t today, when specs change rapidly, even during design, adapting to dynamically evolving market demands and implementation constraints.

A more realistic view, according to Adnan, is a base spec supplemented by a progression of engineering change notices (ECNs), each reflecting incremental updates to the prior set of ECNs on top of the base spec. Compound that with multiple product variants covering different feature extensions. As a writer I know that even if you can merge a set of changes on top of a base doc, the merge may introduce new problems in readability, even in critical details. Yet in review and updates, reviewers naturally lean toward edits considered in a local context rather than evaluating each edit in the context of the whole doc and all spec variants on each update.

Worse yet, in this fast-paced spec update cycle, clarity of explanation often takes a back seat to urgent partner demands, especially when multiple writers contribute to the spec. In the base doc, and especially in ECNs, developers and reviewers lean heavily on the assumption that a “reasonable” reader will not need in-depth explanations for basic details. “Basic” in this context is a very subjective judgement, often assuming more in-depth domain expertise as ECNs progress. In the domain we are considering this is specialized know-how. How likely is it that a trained LLM will have captured enough of that know-how? Adnan likes to point to the RISC-V specs as an example illustrating all these challenges, with a rich superstructure of extensions and ECNs which he finds can in places imply multiple possible interpretations that may not have been intended by the spec writers.

Further on the context point, expert DV engineers read the same specs, but they also talk to design engineers, architects, apps engineers, and marketing to gather additional input which might affect expected use cases and other constraints. And they tap their own specialized expertise developed over years of building and verifying similar designs. None of this is likely to be captured in the spec or in the LLM model.

Automated spec analysis can be enormously valuable in consolidating ECNs with the base spec to generate a detailed and actionable test plan, but that must then be reviewed by a DV expert and corrected through additional prompts. The process isn’t hands-free but nevertheless could be a big advance in DV productivity.

The Breker strategy

Interestingly Breker test synthesis is built on AI methods created long before we had heard of LLMs or even neural nets. Expert systems and rule-based planning software were popular technologies found in early AI platforms from IBM and others and became the foundation of Breker test synthesis. Using these methods Breker software has already been very effective and in production for many years, auto-generating very sophisticated tests from (human-interpreted) high level system descriptions. Importantly, while the underlying test synthesis methods are AI-based they are much more deterministic than modern LLMs, not prone to hallucination.

Now for the principled part of the strategy. It is widely agreed that all effective applications of AI in EDA must build on a principled base to meet the high accuracy and reliability demands of electronic design. For example, production EDA software today builds on principled simulation, principled place and route, and principled multiphysics analysis. The same must surely apply to functional test plan definition and test generation, for which quality absolutely cannot be compromised. Given this philosophy, Breker is building spec interpretation on top of its test synthesis platform along two branches.

The first branch is their own development around natural language processing (NLP), again not LLM-based. NLP also pre-dates LLMs, using statistical methods rather than the attention-based LLM machinery. Dave and Adnan did not share details, but I feel I can speculate a little without getting too far off track. NLP-based systems are even now popular in domain specific areas. I would guess that a constrained domain, a specialist vocabulary, and a presumed high level of expertise in the source content can greatly simplify learning versus requirements for general purpose LLMs. These characteristics could be an excellent fit in interpreting design behavior specs.

Dave tells me they already have such a system in development which will be able to read a spec and convert it into a set of Breker graphs. From that they can directly run test plan generation and synthesis, with support for debug, coverage, portability to emulation, all the capabilities that the principled base already provides. They can also build on the library of test support functions already in production, for coherency checking, RISC-V compliance and SoC readiness, Arm SocReady, power management and security root of trust checks.  A rich set of capabilities which don’t need to be re-discovered in detail in a general purpose LLM model.

The second branch is partnership with some of the emerging agentic ventures. In general, what I have seen among such ventures is either strong agentic expertise but early progress in building verification expertise, or strong verification expertise but early progress in building agentic expertise. A partnership between a strong agentic venture and a strong verification company seems like a very good idea. Dave and Adnan believe that what they already have in their Breker toolset, combined with what they are learning in their internal development, can provide a bridge between these platforms, combining full-blown LLM-based spec interpretation and the advantages of principled test synthesis.

My takeaway

This is a direction worth following. All the excitement and startup focus in verification is naturally on LLMs but today accuracy levels are not yet where they need to be to ensure production quality test plans and test generation without expert DV guidance. We need to see increased accuracy and we need to gain experience in the level of expert effort required to bring auto-generated tests to signoff quality.

Breker is investing in LLM-based spec-driven verification, but like other established verification providers must meanwhile continue to support production quality tools and advance the capabilities of those tools. Their two-pronged approach to spec interpretation aims to have the best of both worlds – an effective solution for the near term through NLP interpretation and an LLM solution for the longer term.

You can learn more about the principled Breker test synthesis products HERE.



Exploring the Latest Innovations in MIPI D-PHY and MIPI C-PHY
by Daniel Nenni on 08-19-2025 at 10:00 am


The white paper “Exploring the Latest Innovations in MIPI D-PHY and MIPI C-PHY” details the latest developments in these two critical high-speed interface technologies, highlighting how they evolve to meet modern demands in camera and display systems across automotive, industrial, healthcare, and XR applications.

The evolution of MIPI D-PHY and MIPI C-PHY reflects the ongoing push toward higher performance and power-efficient data interfaces in camera and display systems. Originally developed for the mobile industry, both PHY types have significantly matured to support diverse applications in automotive, healthcare, industrial vision, and extended reality (XR). These advancements are essential to accommodate surging data rates driven by higher resolutions, expanded frame rates, and real-time image processing.

MIPI D-PHY, introduced in 2009, has incrementally increased its per-lane throughput from 2.5 Gbps to 11 Gbps over several specification versions. Key to supporting these higher rates are signal integrity enhancements such as transmitter de-emphasis and receiver Continuous Time Linear Equalization (CTLE), first introduced in v2.0. Version 3.5 added non-linear Decision Feedback Equalization (DFE) to further improve signal performance, especially in the 6–11 Gbps range. These techniques help mitigate channel losses across increasingly complex physical environments including PCB traces, packages, and connectors.

The reliability challenges that arose as silicon geometries shrank were tackled by introducing new signaling modes. The original 1.2V LVCMOS signaling used for low-power control became problematic in modern nodes with lower core voltages. MIPI D-PHY responded by offering LVLP mode to lower the voltage swing to 0.95V and ultimately developed the Alternate Low Power (ALP) mode. ALP mode discards the LP transmitter/receiver entirely, reusing the high-speed circuits for low power signaling. This not only improves leakage characteristics and reduces IO loading but also enables the PHY to operate over longer channels, up to 4 meters.

The ALP signaling introduces the ALP-00 state, a collapsed differential mode where both wires are grounded, minimizing power during idle periods. Wake pulses and high-speed bursts are coordinated using embedded control signals, enhancing synchronization. Notably, ALP also supports fast lane turnaround, which significantly reduces latency in bidirectional interfaces compared to legacy LP-mode lane switching. Combined with spread spectrum clocking, first introduced in v2.0 to mitigate EMI, MIPI D-PHY’s power and emissions profile is increasingly well-suited for automotive and industrial-grade deployments.

In a major architectural shift, MIPI D-PHY v3.5 introduced Embedded Clock Mode (ECM). In ECM, clock information is no longer carried on a dedicated lane but embedded in the data stream using 128b/132b encoding with clock and data recovery (CDR). This allows the clock lane to be repurposed as a fifth data lane, increasing throughput by 25% in common configurations. ECM also reduces EMI by eliminating the always-on toggling clock line and permits skew-insensitive timing between data lanes. However, the trade-off is reduced backward compatibility: ECM-only PHYs cannot interoperate with older Forwarded Clock Mode (FCM)-only devices.
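
The 25% figure follows from simple lane arithmetic: a common four-data-lane-plus-clock configuration becomes a five-data-lane configuration. A minimal sketch of that calculation, with the 128b/132b coding efficiency shown alongside for reference (a rough illustration, not a spec-accurate link-budget calculation):

```cpp
#include <cstdio>

int main() {
    const int fcm_data_lanes = 4;             // Forwarded Clock Mode: 4 data lanes + dedicated clock lane
    const int ecm_data_lanes = 5;             // Embedded Clock Mode: clock lane repurposed as a data lane
    const double coding_eff  = 128.0 / 132.0; // 128b/132b line code: payload fraction of the raw line rate

    double lane_gain = 100.0 * (static_cast<double>(ecm_data_lanes) / fcm_data_lanes - 1.0);
    std::printf("Extra data lane: +%.0f%% throughput for a 4+1 lane configuration\n", lane_gain);
    std::printf("128b/132b coding efficiency: %.1f%% of the raw line rate\n", 100.0 * coding_eff);
    return 0;
}
```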

MIPI C-PHY, launched in 2014, uses a 3-wire lane and a ternary signaling method to achieve efficient data encoding. The original 6-Wirestate configuration encoded 16 bits in 7 symbols for an encoding efficiency of 2.28x. As symbol rates increased from 2.5 to 6 Gsps, data rates rose to 13.7 Gbps per lane. Equalization support was expanded in versions 1.2 and 2.0 through advanced TxEQ, CTLE, and various training sequences. Low power features were also introduced, including LVHS, LVLP, and ALP modes, often mirroring MIPI D-PHY enhancements while adapting them to MIPI C-PHY’s unique signaling format.

The landmark change came with MIPI C-PHY v3.0 and the 18-Wirestate mode. This innovation retains the same 3-wire lane interface but increases encoding efficiency to 3.55x by introducing 18 distinct differential states across wire pairs. With this, the PHY can achieve up to 24.84 Gbps per lane on short channels. New encoding schemes and state transitions were developed, with each symbol defined by a 5-bit code representing polarity, rotation, and flip attributes. The additional signaling levels require multi-level slicers in the receiver and increased TX power but enable significantly greater throughput.
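
The quoted per-lane rates follow from bits per symbol multiplied by symbol rate. A small sketch of that arithmetic (the 18-Wirestate symbol rate is inferred here from the quoted 24.84 Gbps figure rather than stated explicitly above):

```cpp
#include <cstdio>

int main() {
    // 6-Wirestate: 16 bits carried in 7 symbols -> ~2.28 bits per symbol
    const double bits_per_sym_6ws  = 16.0 / 7.0;
    const double sym_rate_6ws_gsps = 6.0;  // quoted peak symbol rate
    std::printf("6-Wirestate:  %.2f b/sym x %.1f Gsps = %.1f Gbps per lane\n",
                bits_per_sym_6ws, sym_rate_6ws_gsps, bits_per_sym_6ws * sym_rate_6ws_gsps);

    // 18-Wirestate: ~3.55 bits per symbol; symbol rate implied by the 24.84 Gbps per-lane figure
    const double bits_per_sym_18ws = 3.55;
    const double lane_rate_18ws    = 24.84;
    std::printf("18-Wirestate: %.2f b/sym -> ~%.1f Gsps implied for %.2f Gbps per lane\n",
                bits_per_sym_18ws, lane_rate_18ws / bits_per_sym_18ws, lane_rate_18ws);
    return 0;
}
```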

The 18-Wirestate system also introduces a more sophisticated lane mapping and control mechanism. By embedding turnaround codes into the last transmitted symbol burst, MIPI C-PHY accelerates lane reversal, improving duplex performance. Furthermore, signal integrity is preserved through careful voltage slicing and receiver sensitivity enhancements, ensuring reliability despite reduced signal-to-noise ratio due to the multi-level signaling.

Together, the continued evolution of MIPI D-PHY and MIPI C-PHY demonstrates the MIPI Alliance’s focus on scalable, forward-compatible solutions that can bridge mobile, automotive, and emerging computing environments.

You can read the full whitepaper here.

Also Read:

Mixel at the 2025 Design Automation Conference #62DAC

2025 Outlook with Justin Endo of Mixel

MIPI solutions for driving dual-display foldable devices



448G: Ready or not, here it comes!
by Kalar Rajendiran on 08-19-2025 at 6:00 am


The march toward higher-speed networking continues to be guided by the same core objectives as always: increase data rates, lower latency, improve reliability, reduce power consumption, and maintain or extend reach while controlling cost. For the next generation of high-speed interconnects, these requirements are embodied in the development of 448G high-performance SerDes. This will serve as the foundational electrical layer for scaling Ethernet beyond 1.6T, while also enabling other advanced interconnect architectures in AI, storage, and cloud-scale computing.

Meeting the core objectives has become progressively more complex with each generation. Today’s challenges include the diverging performance needs of AI and general networking applications, the distinct technical contributions of multiple standards organizations, and the growing sophistication required in electrical PHY implementations to meet 448G signaling requirements. The push for 448G is taking place in a context where the industry is both motivated to move quickly and tasked with solving a broader set of technical and deployment variables than ever before.

Synopsys provided a status update on this very topic during a webinar it hosted recently. The webinar was delivered by Kent Lusted, Distinguished Architect, and Priyank Shukla, Director of Product Management. This webinar is now accessible on-demand from here.

Industry Readiness and Market Drivers

The readiness for 448G Electrical PHY adoption is high, particularly in AI-driven scale-up and scale-out data center networks where back-end interconnect bottlenecks are already limiting system performance. Operators in these environments are pressing for faster rates with minimal latency and power, making 448G the logical next step. Hyperscalers and large enterprise operators are preparing infrastructure roadmaps that anticipate both short-reach copper-based implementations and longer-reach optical deployments.

From a supply chain perspective, SerDes technology at 224G per lane — the immediate precursor to 448G dual-lane architectures — has matured rapidly, providing a strong technical foundation. This maturity enables early prototyping of 448G PHYs, ensuring that when the standards are finalized, both silicon and system designs will be ready for deployment.

Emerging Standards and Collaborative Progress

Multiple standards bodies are actively shaping the path to 448G Electrical PHYs. The Optical Internetworking Forum (OIF) launched its CEI-448G framework project in July 2024, setting the stage for defining channel characteristics, modulation targets, and reach objectives. The IEEE P802.3dj task force is extending Ethernet standards to 1.6T and 200G-per-lane, with 448G PHYs as critical building blocks. The Ultra Ethernet Consortium (UEC) and UALink are aligning electrical interface specifications with AI-scale fabric requirements, while the Storage Networking Industry Association (SNIA) is hosting workshops to converge perspectives from AI, storage, and networking domains. The Open Compute Project (OCP) continues to drive deployment-oriented specifications, addressing form factors, integration models, and operational considerations for hyperscale adoption.

This collaborative landscape ensures that the final specifications will be both technically robust and deployment-ready, even as each organization brings a unique emphasis—be it Ethernet protocol compliance, electrical interoperability, electro-optical links, AI optimization, or system integration.

Advanced Modulation Schemes

Selecting the optimal modulation for 448G is one of the most significant technical decisions in PHY design. The primary candidates—PAM4, PAM6, CROSS-32, DSQ-32, PR-PAM4, BiDi-PAM4, SE-PAM4, and DMT—offer varying trade-offs between bandwidth efficiency, signal-to-noise ratio, complexity, and compatibility.

PAM4 remains attractive for its backward compatibility and alignment with optical implementations, though it demands higher circuit bandwidth. PAM6 offers some bandwidth relief but at the cost of more complex DSP and reduced noise margin. Two-dimensional constellations like CROSS-32 and DSQ-32 can improve detector margin for certain symbol patterns but require even more sophisticated detection algorithms. Other approaches, such as BiDi-PAM4 and SE-PAM4, aim to maintain I/O count while introducing new signal recovery challenges. The final modulation choice—or set of choices—must balance implementation feasibility with performance goals across AI and non-AI environments.

Next-Generation Channel Designs

Channel topology is a critical determinant of PHY performance at 448G. AI-oriented deployments tend to favor short, low-loss paths such as direct attach copper cables, near-package interconnects, or co-packaged optics (CPO), which simplify equalization and reduce latency. In contrast, front-panel optical modules in general networking often require longer PCB traces, multiple connectors, and possibly retimers, all of which increase signal degradation and complexity in the receiver.

Synopsys’ analysis of 12 channel topologies, in collaboration with Samtec, evaluated performance under realistic conditions including crosstalk, jitter, noise, and ratio level mismatch (RLM). These results inform the specification process by showing how PAM4 and PAM6 modulation schemes perform across a range of physical configurations, ensuring that reach and margin targets are grounded in actual channel behavior.

Design Complexity in 448G Electrical PHYs

Implementing a 448G PHY in SerDes form involves overcoming significant design challenges. At such high data rates, unit intervals are extremely short, requiring precise timing recovery, advanced feed-forward and decision-feedback equalization, and high-resolution ADC/DAC operation. Moving from PAM4 to PAM6, for instance, increases the number of symbol transitions from 16 to 36, the number of comparators in an unrolled DFE from 16 to 36, and the detector bit width from 2 bits to 3 bits, all of which demand higher precision and potentially higher power. These realities must be considered alongside the modulation choice, packaging strategy, and thermal constraints. No clear implementation advantage for either approach has been identified to date.
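
Those complexity numbers scale with the square of the number of amplitude levels, since every (previous symbol, current symbol) pair is a distinct transition, and the unrolled DFE figures quoted above grow with the same square-law term. A minimal sketch of the counting:

```cpp
#include <cmath>
#include <cstdio>

void pam_complexity(int levels, const char* name) {
    int transitions   = levels * levels;                                  // possible previous/current symbol pairs
    int detector_bits = static_cast<int>(std::ceil(std::log2(levels)));  // bits needed to represent one decision
    double bits_per_symbol = std::log2(levels);                          // information carried per symbol
    std::printf("%s: %d levels, %d transitions, %d-bit detector, %.2f bits/symbol\n",
                name, levels, transitions, detector_bits, bits_per_symbol);
}

int main() {
    pam_complexity(4, "PAM4");  // 16 transitions, 2-bit detector, 2.00 bits/symbol
    pam_complexity(6, "PAM6");  // 36 transitions, 3-bit detector, 2.58 bits/symbol
    return 0;
}
```

The roughly 29% extra information per PAM6 symbol is what buys the bandwidth relief mentioned earlier; the square-law growth in transitions and comparators is the price paid in DSP complexity and power.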

Synopsys’ Role in Accelerating 448G Development

Synopsys is contributing to accelerated 448G development by engaging across all major forums—IEEE, OIF, UEC, SNIA, and OCP—ensuring that data, insights, and perspectives are shared early and often. The company’s early studies on channel topologies, combined with in-depth analysis of modulation trade-offs and DSP architecture complexity, give standards bodies a concrete basis for narrowing options quickly.

By promoting a phased approach that will help deliver the right PHYs and interfaces “just in time”, Synopsys helps ensure that high-priority deployments can begin without waiting for the complete standards suite to mature. This strategy, combined with technical modeling, real-world measurements, and active ecosystem engagement, positions Synopsys as a key enabler of the industry’s transition to 448G Electrical PHYs.

Also Read:

Synopsys Webinar – Enabling Multi-Die Design with Intel

cHBM for AI: Capabilities, Challenges, and Opportunities

Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh



Should the US Government Invest in Intel?
by Daniel Nenni on 08-18-2025 at 10:00 am


“Most companies don’t die because they are wrong; most die because they don’t commit themselves. They fritter away their valuable resources while attempting to make a decision. The greatest danger is standing still.” – Andy Grove, Only the Paranoid Survive, first published in 1996.

Looking back 20 years, we all know this is true for Intel and many other companies. Personally, I feel that the lack of competition was Intel’s undoing. AMD barely survived on the NOT INTEL market for a long time before finally presenting a competitive threat, which is where we are today.

There are a lot of conspiracy theories floating around. The media seems to be in a death spiral over Intel, which has caused a surge of interest on SemiWiki. We are happy to deploy additional cloud resources to keep up with the surge in traffic as it is a worthwhile cause. Intel is a national treasure that must be protected, absolutely.

It would be difficult to list all of the semiconductor manufacturing innovations that came out of Intel over the last 50 years; the list is lengthy. Suffice to say Intel is THE most innovative company in the history of semiconductors. The latest innovation is Back Side Power Delivery (BSPD), which Intel has today at 18A. Other foundries will follow in the years to come, but Intel is the leader in BSPD and I’m excited to see what is next.

Just to recap recent events:

Lip-Bu Tan took over as Intel CEO on March 18th of 2025. Four months later, Lip-Bu has reshaped Intel to rid it of the analysis paralysis that had built up over the years.

On August 6th Senator Tom Cotton sent a letter to Intel Chairman Frank Yeary questioning Lip-Bu Tan’s investments in China:

“Mr. Tan reportedly controls dozens of Chinese companies and has a stake in hundreds of Chinese advanced-manufacturing and chip firms. At least eight of these companies reportedly have ties to the Chinese People’s Liberation Army.”

This is not true of course. Senator Tom Cotton is up for re-election next year and has a reputation for being a China hawk, so this is US politics at its worst.

On August 7th Donald Trump (POTUS) responded by posting on Truth Social that “The CEO of Intel is highly CONFLICTED and must resign, immediately. There is no other solution to this problem.”

After the media immediately parroted calls for Lip-Bu to resign, Lip-Bu Tan responded with a letter to Intel employees clearly stating where his loyalties lie. This is one of the best written documents I have seen released from Intel. It is 100% factual with zero political or marketing spin.

On the following Monday (August 11th) Lip-Bu flew to Washington DC and met with POTUS to present the Intel value proposition for the United States of America as a US based leading edge semiconductor manufacturer. The meeting has been viewed as highly positive after POTUS followed up his first post about Lip-Bu Tan with, “His success and rise is an amazing story. Mr. Tan and my Cabinet members are going to spend time together, and bring suggestions to me during the next week.”

To me this supports my optimism regarding Lip-Bu Tan’s leadership and making Intel relevant again.

The question we all have is how will this move forward? Will the USG invest in Intel and what would that look like? Even more importantly, was all of this planned in advance?

The media has said it was in fact planned in advance so political insiders could short Intel stock before the Tom Cotton letter came out and make a quick dollar. I highly doubt that, since stock malfeasance is a felony, but so is defamation, so someone should be held accountable here.

Bottom line: The United States should invest in outcomes. The outcome being a competitive US based leading edge semiconductor manufacturer. The United States should be able to do this without defaming legendary CEOs and iconic companies like Intel but so be it. Hopefully this time around the ends will justify the highly politicized means.

Also Read:

Should Intel be Split in Half?

Making Intel Great Again!

Why I Think Intel 3.0 Will Succeed



PDF Solutions and the Value of Fearless Creativity
by Mike Gianfagna on 08-18-2025 at 6:00 am


PDF Solutions has been around for over 30 years. The company began with a focus on chip manufacturing and yield. Since the beginning, PDF Solutions anticipated many shifts in the semiconductor industry and has expanded its impact with enhanced data analytics and AI. Today, the company’s impact is felt from design to manufacturing, to the entire product lifecycle across the semiconductor supply chain as shown in the graphic above.

I did a bit of digging to understand the history and impact of this unique company. Along the way, I found a short but informative video (a link is coming). One of the speakers is a high-profile individual who talked about PDF Solutions and the value of fearless creativity. That quote did a great job of setting the tone for what follows.

A Memorable Conversation


To begin my quest, I was able to spend some time talking with Christophe Begue, VP of corporate strategic marketing at PDF. Besides PDF, Dr. Begue has a long history of technology and business development at companies such as Oracle, IBM, and Philips Electronics. As Christophe began describing some of the innovations at PDF, he discussed the key role that the PDF data platform plays in the industry. In the cloud, PDF manages over 2.5PB of data, the equivalent of 3 million hours of video. Many customers deploy the PDF database on site, which allows them to centrally manage all their massive manufacturing data.

He went on to explain that PDF has always been focused on anticipating the next challenge the semiconductor industry will face, so it can stay ahead of the problem. Current challenges include dealing with innovations in 3D, operating through a complex global supply chain, and figuring out how to leverage AI at all levels, from design to manufacturing. He described how PDF’s capabilities are at the intersection of these challenges, with a common platform and database to improve collaboration and scale AI to drive operational efficiency.

Christophe went on to describe PDF’s three solution areas, all of which leverage a common data infrastructure powered by AI. The platform contains several significant capabilities that impact the entire semiconductor industry. These include systems to help with characterization, among them a unique contactless probing system developed by PDF. More on that in a moment. There are also technologies to help optimize manufacturing and others to integrate and coordinate the worldwide supply chain. The diagram below summarizes how these systems leverage a common data infrastructure powered by AI to impact the entire semiconductor industry.

Christophe also described the broad impact PDF has on the entire semiconductor supply chain. The substantial design and manufacturing challenges the industry is facing can only be addressed with broad-based collaboration. There are many parts of the puzzle, and they all must fit together optimally to make progress. PDF Solutions is a significant catalyst for this broad-based collaboration. PDF believes this can only be done through the use of an open platform. With that open platform, PDF is able to integrate with other solution providers used across the semiconductor industry. He shared the graphic below to illustrate the breadth of PDF’s impact and partnerships.

A Short but Potent Video


This is the video that inspired the title of this post. It came from a discussion with Dr. Dennis Ciplickas, who spent 25 years at PDF Solutions. He said, “The time I spent at PDF fundamentally changed how I do my job…Being creative and giving yourself the freedom to be creative, even if you don’t really understand everything about the domain can lead to things you didn’t expect. Working at PDF we were always pushing the boundaries…and that led to insights…it’s the value of fearless creativity.” Dennis is now the technical lead manager for silicon product development at one of the leading cloud hyperscalers.

This video also describes the invention by PDF of eProbe, a contactless electrical test system that scans an entire wafer to identify and analyze hot spots. This is one example of how PDF pushes boundaries and improves the quality of semiconductor devices. There are many more such examples.

To Learn More


I’ve provided just an overview of how PDF Solutions is changing the semiconductor ecosystem. There is so much more to the story. If advanced design is giving you a headache, you need to know about PDF Solutions. The company can help. You can listen to an informative podcast with Dr. John Kibarian, president, CEO and co-founder of PDF Solutions on SemiWiki here. There’s also a great interview with John on Investment Reports here.

There are a couple of excellent blog posts available on PDF’s website as well:

Secure Data Collaboration in the Semiconductor Industry: Unlocking Innovation Through AI and Connectivity

Perspectives on PDF Solutions Performance in 2024 and Path Forward

And of course, that excellent video I mentioned can be accessed here. And that’s an introduction to PDF Solutions and the value of fearless creativity.