
CDC Verification for Safety-Critical Designs – What You Need to Know
by Mike Gianfagna on 11-13-2025 at 6:00 am


Verification is always a top priority for any chip project. Re-spins result in lost time-to-market and significant cost overruns. Chip bugs that make it to the field bring another level of lost revenue, lost brand confidence, and potentially costly litigation. If the design is part of the avionics or controls for an aircraft, the stakes go up, way up. This class of design must adhere to substantial rules and guidelines, and some of those rules have evolved over decades, making interpretation and adherence challenging.

A recent white paper from Siemens Digital Industries Software examines this class of design in the context of a particularly vexing design bug – clock domain crossing (CDC) issues and the resultant metastability. The white paper does a great job explaining the subtleties of CDC bugs and how to address those issues against the rigors of safety-critical rules for airborne systems. If your chip is destined for airborne use, this white paper is a must-read. A link is coming, but first I’d like to provide an overview of CDC verification for safety-critical designs – what you need to know.

CDC Challenges

The white paper does a great job explaining how CDC bugs can cause problems with a chip design, and in particular intermittent problems. In safety-critical applications, an intermittent problem can be difficult to find, and bugs that make it to silicon can result in catastrophic consequences.

To summarize the issue, we need to examine the impact of metastability on a design. The term refers to what happens in digital circuits when the clock and data inputs of a flip-flop change at approximately the same time. When this occurs, the flip-flop output can oscillate before settling to a random value. This metastability can lead to incorrect design functionality such as data loss or data corruption on CDC paths. The more asynchronous clock domains there are in a design, the worse the problem can become. And in today’s highly integrated and concurrent designs, the number of independent clock domains in a typical device keeps growing.
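
To put rough numbers on the risk, a common first-order model estimates the mean time between failures (MTBF) of a synchronizing flip-flop from the clock and data rates and the flop’s resolution constants. The Python sketch below is purely illustrative; the constants are invented example values, not figures from the white paper.

```python
import math

def synchronizer_mtbf_seconds(f_clk_hz, f_data_hz, t_resolve_s, tau_s, t0_s):
    """First-order metastability model: MTBF = exp(t_resolve / tau) / (T0 * f_clk * f_data)."""
    return math.exp(t_resolve_s / tau_s) / (t0_s * f_clk_hz * f_data_hz)

# Invented example: 500 MHz receiving clock, 100 MHz asynchronous data,
# one spare clock period (2 ns) of resolution time, tau = 20 ps, T0 = 10 ps.
mtbf = synchronizer_mtbf_seconds(500e6, 100e6, 2e-9, 20e-12, 10e-12)
print(f"Estimated MTBF: {mtbf:.2e} seconds")
```

The exponential term is why a second synchronizing flop, which adds a full clock period of resolution time, makes failures astronomically rare, and why an unsynchronized crossing with essentially no resolution margin is so dangerous.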

The white paper presents many examples that illustrate the types of problems to look for and how to correct them. It points out that this is a serious problem in safety-critical designs because it frequently causes chips to exhibit intermittent failures. These failures generally go undetected during simulation (which tests a chip’s logic functions) and static timing analysis (which checks timing within a single clock domain).

The paper goes on to explain that a typical verification methodology simply does not consider potential bugs from clock-domain crossing paths. Thus, if CDC paths are not explicitly verified, CDC bugs are typically identified in the actual hardware device in the field, a very bad outcome for safety-critical designs.

Design Assurance – the Good and the Bad

Another important part of this story is the guidelines that must be adhered to when sourcing safety-critical airborne devices. The white paper describes document RTCA/DO-254 “Design Assurance Guidance for Airborne Electronic Hardware” in detail. This specification is used by the Federal Aviation Administration (FAA), European Union Aviation Safety Agency (EASA), and other aviation authorities to ensure that complex electronic hardware used in avionics works reliably as specified, avoiding faulty operation and potential air disasters.

This goal is clearly important. One of the challenges of implementing a methodology to achieve the goal is the size and scope of the DO-254 spec. The FAA began enforcing it in 2005. The document is modeled after earlier specifications for certifying software, which were originally published over 45 years ago.

So, there is a lot of information in this document, both old and new. All in-flight hardware (FPGA or ASIC designs) must now comply with DO-254, and correct interpretation of the requirements and implementation in a production design flow presents challenges.

Digging deeper, the white paper explains that DO-254 projects assign a design assurance level (DAL) of A through E. The level corresponds to the criticality of a resulting failure. A failure in a level A design would result in catastrophic conditions (such as the plane crashing), while a failure in a level E design might mean that some passengers could be subject to minor inconvenience. Level A (catastrophic) and level B (hazardous/severe/major) projects must not only follow DO-254 processes but must also address additional safety concerns.

How to Automate CDC Verification

The white paper then presents a detailed overview of how to build a methodology that will conform to DO-254 requirements and deliver reliable, safe chips. It explains that a comprehensive CDC verification solution must perform four distinct tasks (a toy illustration of the first task follows the list):

  1. Perform a structural analysis
  2. Verify transfer protocols
  3. Globally check for reconvergence
  4. Implement netlist glitch analysis
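
As a flavor of the first task, a structural CDC check traverses the netlist, compares the clock domain of each path’s source and destination registers, and flags crossings that lack a recognized synchronizer. The toy Python sketch below is my own illustration of the idea, not a representation of how Questa CDC is implemented:

```python
# Toy structural CDC check: flag register-to-register paths that cross
# clock domains without passing through a recognized synchronizer.
paths = [
    # (source_reg, source_clk, dest_reg, dest_clk, via_synchronizer)
    ("tx_data_r", "clk_a", "rx_data_r", "clk_b", False),   # unsafe crossing
    ("req_r",     "clk_a", "req_sync1", "clk_b", True),    # 2-flop synchronizer
    ("cnt_r",     "clk_a", "cnt_next",  "clk_a", False),   # same domain, fine
]

violations = [
    (src, dst)
    for src, s_clk, dst, d_clk, synced in paths
    if s_clk != d_clk and not synced
]

for src, dst in violations:
    print(f"CDC violation: {src} -> {dst} crosses clock domains without a synchronizer")
```

A real flow must also verify that each synchronizer actually works for the transfer protocol in use, which is where the remaining three tasks come in.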

Details of these tasks are presented, as well as some of the unique capabilities of the Siemens Questa CDC solution. The white paper explains that many companies have recognized the benefits of Questa CDC and have adopted it as an added design assurance strategy in their verification arsenal. Specific details are presented for several real commercial implementations using Questa CDC. These examples cover many diverse projects:

  • U.S.-based storage/networking company
  • Large global computer company
  • Large Japanese consumer products company
  • U.S.-based wireless communications provider
  • Maker of military space systems
  • Large aerospace technology company
  • Defense and aerospace systems supplier

The white paper goes on to explain that one of the key aspects of the DO-254 process is to determine that the tools used to create and verify designs are working properly. The process to ensure this is called “tool assessment.”

There are many dimensions to this process, and substantial details about how to achieve a successful tool assessment are presented. The diagram below provides an overall flow of the process.

Design and verification tool assessment and qualification flow diagram

A tool vendor cannot assess or qualify their own tools, and the FAA does not provide blanket approval for the use of any tools in DO-254 projects.

This white paper does provide valuable details and suggestions for getting through the assessment process for Questa CDC as easily as possible.

To Learn More

If you’re involved in the development of safety-critical electronics, this white paper provides substantial value regarding how to minimize CDC risks and how to build a compliant design flow.

The information presented is detailed, clear and actionable. And there is an Appendix with many additional and useful references. You can get your copy of Automating Clock-Domain Crossing Verification for DO-254 (and Other Safety-Critical) Designs here. 

You can also learn more about Questa CDC here. And that’s CDC verification for safety-critical designs – what you need to know.

Also Read:

A Compelling Differentiator in OEM Product Design

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Visualizing hidden parasitic effects in advanced IC design 



Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots
by Daniel Nenni on 11-12-2025 at 10:00 am

Ceva WiFi 7 1x1 Client IP

In the rapidly evolving landscape of connected devices, where artificial intelligence meets the physical world, Ceva  has unveiled a groundbreaking solution: the Ceva-Waves Wi-Fi 7 1×1 client IP. Announced on October 21, 2025, this IP core is designed to supercharge AI-enabled IoT devices and pioneering physical AI systems, enabling them to sense, interpret, and act with unprecedented responsiveness. As IoT ecosystems expand, projected to encompass over 30 billion devices by 2030, reliable, low-latency connectivity becomes paramount. Ceva’s innovation addresses this need head-on, leveraging the IEEE 802.11be standard to deliver ultra-high performance in compact, power-constrained form factors.

At its core, the Ceva-Waves Wi-Fi 7 1×1 client IP is a turnkey solution tailored for client-side applications, such as wearables, smart home gadgets, security cameras, and industrial sensors. Unlike bulkier access point implementations, this 1×1 configuration (one spatial stream for transmit and receive) optimizes for cost-sensitive, battery-powered designs, making it ideal for mass-market adoption. Key Wi-Fi 7 features baked into the IP include Multi-Link Operation, which allows simultaneous data transmission across multiple frequency bands (2.4 GHz, 5 GHz, and 6 GHz) for seamless aggregation and reduced interference; 4096-QAM modulation for 20% higher throughput than Wi-Fi 6; and enhanced puncturing to dodge congested channels dynamically. These capabilities slash latency to sub-millisecond levels, boost peak speeds beyond 5 Gbps, and enhance reliability in dense environments, crucial for real-time applications like augmented reality glasses or autonomous drones.
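
The 20% throughput claim follows directly from the modulation order: moving from the 1024-QAM of Wi-Fi 6 to 4096-QAM adds two bits per symbol. A quick back-of-the-envelope check in Python:

```python
import math

bits_per_symbol_wifi6 = math.log2(1024)   # 10 bits per symbol with 1024-QAM
bits_per_symbol_wifi7 = math.log2(4096)   # 12 bits per symbol with 4096-QAM

gain = bits_per_symbol_wifi7 / bits_per_symbol_wifi6 - 1
print(f"Per-symbol throughput gain: {gain:.0%}")   # prints 20%
```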

What sets this IP apart is its synergy with Ceva’s broader Smart Edge portfolio, particularly the NeuPro family of NPUs. When paired with Wi-Fi 7 connectivity, these NPUs empower devices to process sensor data, run inference models, and make decisions locally at the edge. This on-device intelligence minimizes cloud dependency, fortifying data privacy by keeping sensitive information, like health metrics from a fitness tracker, off remote servers. It also extends battery life by up to 30% through efficient power management and reduces operational costs by curbing data transmission volumes. In essence, Ceva’s solution transforms passive IoT nodes into proactive physical AI agents that perceive their surroundings (via cameras or microphones), reason through AI algorithms, and act autonomously, whether adjusting a smart thermostat based on occupancy or alerting factory workers to hazards.

Tal Shalev, Vice President and General Manager of Ceva’s Wireless IoT Business Unit, emphasized the strategic timing: “Wi-Fi 7’s breakthroughs in speed, resilience, and latency are driving rapid adoption. Our turnkey solution helps customers cut complexity and time-to-market delivering smarter, more responsive IoT experiences powered by edge intelligence.” Already licensed by multiple leading semiconductor firms, the IP has seen swift uptake, underscoring its market readiness. Industry analysts echo this enthusiasm; Andrew Zignani, Senior Research Director at ABI Research, notes, “Wi-Fi 7 is set to transform IoT by enabling the low-latency, high-throughput connectivity required for real-time edge intelligence and Physical AI. Solutions like Ceva’s are critical to bringing these capabilities into cost-sensitive, battery-powered devices.”

The implications ripple across sectors. In consumer wearables, imagine earbuds that not only stream audio but also perform real-time voice-to-text translation without lag. Smart homes could orchestrate ecosystems where lights, locks, and appliances collaborate via mesh networks, anticipating user needs through predictive AI. Industrial IoT benefits from resilient links in harsh environments, enabling predictive maintenance that prevents downtime. For emerging physical AI—think robotic companions or self-navigating vacuums—Wi-Fi 7 provides the deterministic backbone for multi-device orchestration, fostering collaborative intelligence akin to a “swarm” of sensors.

Bottom Line: Ceva’s move positions it as a linchpin in the Wi-Fi 7 rollout, with over 60 licensees already harnessing the CEVA-Waves family for diverse applications. As edge computing surges, this IP doesn’t just connect devices; it imbues them with agency, paving the way for a future where AI seamlessly bridges digital and physical realms. By democratizing advanced connectivity, Ceva accelerates innovation, ensuring that smarter, more intuitive experiences are accessible to all.

Contact CEVA

Also Read:

A Remote Touchscreen-like Control Experience for TVs and More

WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier



Semidynamics Inferencing Tools: Revolutionizing AI Deployment on Cervell NPU
by Daniel Nenni on 11-12-2025 at 8:00 am

SemiDynamics Cervell NPU

In the fast-paced world of AI development, bridging the gap from trained models to production-ready applications can feel like an eternity. Enter Semidynamics’ newly launched Inferencing Tools, a game-changing software suite designed to slash deployment times on the company’s Cervell RISC-V Neural Processing Unit. Announced on October 22, 2025, these tools promise to transform prototypes into robust products in hours, not weeks, by leveraging seamless ONNX Runtime integration and a library of production-grade samples.

Semidynamics, a European leader in RISC-V IP cores, has built its reputation on high-performance, open-source hardware tailored for machine learning. The Cervell NPU exemplifies this ethos: an all-in-one RISC-V architecture fusing CPU, vector, and tensor processing for zero-latency AI workloads. Configurable from 8 to 256 TOPS at INT4 precision and up to 2GHz clock speeds, Cervell scales effortlessly for edge devices, datacenters, and everything in between. Its fully programmable design eliminates vendor lock-in, supporting large language models, deep learning, and high-performance computing with standard RISC-V AI extensions. Whether powering on-device assistants or cloud-scale vision pipelines, Cervell’s efficiency stems from its unified instruction stream, enabling infinite customization without fragmented toolchains.

At the heart of the Inferencing Tools is a high-level library layered atop Semidynamics’ ONNX Runtime Execution Provider for Cervell. Developers no longer wrestle with model conversions or low-level kernel tweaks. Instead, they point to an ONNX file, sourced from repositories like Hugging Face or the ONNX Model Zoo, select a configuration, and launch inference directly on Cervell hardware. Clean APIs handle session setup, tensor management, and orchestration, stripping away boilerplate code and minimizing integration risks. This abstraction sits comfortably above the Aliado SDK, Semidynamics’ kernel-level library for peak performance tuning, offering two lanes: rapid prototyping via the Tools or fine-grained optimization via Aliado.

ONNX Runtime integration is the secret sauce. As an open-standard format, ONNX ensures compatibility across ecosystems, and Semidynamics’ Execution Provider plugs it into Cervell’s vector and tensor units via the Aliado Kernel Library. The result? Plug-and-play execution for thousands of pre-trained models, with validated performance across diverse topologies. No more custom wrappers or compatibility headaches—developers focus on application logic, not plumbing.
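
For readers unfamiliar with ONNX Runtime, hardware backends are exposed as execution providers that are listed when an inference session is created. The sketch below shows the general pattern; the provider name "CervellExecutionProvider", the model file, and the input shape are placeholders I have assumed for illustration, not names confirmed by Semidynamics.

```python
import numpy as np
import onnxruntime as ort

# Prefer a (hypothetical) Cervell execution provider if this build of ONNX
# Runtime registers one; otherwise fall back to the standard CPU provider.
preferred = ["CervellExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]

session = ort.InferenceSession("resnet50.onnx", providers=providers)

# Feed a dummy image-shaped tensor and run inference; supported operators are
# dispatched to the first provider in the list.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})
print("Output shape:", outputs[0].shape)
```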

To supercharge adoption, Semidynamics includes production-grade samples that serve as blueprints for real-world apps. For LLMs, expect ready-to-run chatbots using Llama or Qwen models, complete with session handling and response generation. Vision enthusiasts get YOLO-based object detection pipelines for real-time analysis, while image classifiers draw from ResNet, MobileNet, and AlexNet for tasks like medical imaging or autonomous navigation. These aren’t toy demos; they’re hardened for scale, with built-in error handling and optimization hooks.

The benefits ripple outward. “Developers want results,” notes Pedro Almada, Semidynamics’ lead software developer. “With the Inferencing Tools, you’re running on Cervell, prototype in hours, then harden for production.” Teams report shorter cycles, predictable latency, and maintainable codebases, ideal for embedding AI in agents, assistants, or edge pipelines. Complementing this is the Aliado Quantization Recommender, a sensitivity-aware tool that scans ONNX models for optimal bit-widths (INT4 to INT2), balancing accuracy and bandwidth without exhaustive trials.

Bottom line: In an era where AI deployment lags innovation, Semidynamics’ Inferencing Tools democratize Cervell’s power. By fusing open hardware with streamlined software, they accelerate the journey from lab to launch, empowering developers to ship smarter, faster products. As RISC-V gains traction in AI, expect this suite to redefine edge inferencing—open, scalable, and unapologetically efficient.

Also Read:

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Vision-Language Models (VLM) – the next big thing in AI?

Semidynamics adds NoC partner and ONNX for RISC-V AI applications



Adding Expertise to GenAI: An Insightful Study on Fine-Tuning
by Bernard Murphy on 11-12-2025 at 6:00 am

AI Model Tuner

I wrote earlier about how deep expertise, say for high-quality RTL design or verification, must be extracted from in-house know-how and datasets. In general, such methods start with one of many possible pre-trained models (GPT, Llama, Gemini, etc.). To this, consultants or in-house teams add fine-tuning, initially through supervised fine-tuning (SFT), refined through reinforcement learning from human feedback (RLHF), and subsequently enhanced and maintained through iterative refinement. ChatGPT claims this is the dominant flow (I’m inclined to think it is pretty accurate in its own domain). Supervision is through labeling (question/answer pairs). In most cases relying on human labeling alone is too expensive, so we must learn how to automate this step.

 A nice example of SFT from Microsoft

This Microsoft paper studies two different methods to fine-tune a pre-trained model (GPT-4), adding expertise on recent sporting events. The emphasis in this paper is on the SFT step rather than following steps. Before you stop reading because this isn’t directly relevant to your interests, I can find no industry-authored papers on fine-tuning for EDA. I know from a comment at a recent conference that Microsoft hardware groups are labeling design data, so I suspect topics like this may be a safe proxy for publishing research in areas relevant to internal proprietary work.

Given the topic tested in the study, the authors chose to fine-tune with data sources (here Wikipedia articles) added after the training cutoff for the pre-trained model, in this case September 2021. They looked at two approaches to fine-tuning on this corpus, one token-based and one fact-based.

The token-based method for label generation is very simple and, per the paper, mirrors standard practice. Here they seed with a manually generated label for the article overview section and prompt the model to generate a bounded set of labels from the article. The second method (which they call fact-based) is similar, except that it prompts the model to break complex sentences down, where needed, into multiple atomic sentences. The authors also allowed for some filtering in this case to remove facts irrelevant to the purpose of the study. Here also the model was asked to generate multiple unique labels.
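
To make the two labeling approaches concrete, here is a rough Python sketch of what a fact-based labeling pipeline could look like. The prompts and the generate callable (standing in for whatever LLM API the team uses) are my own assumptions, not details from the Microsoft paper; the token-based variant would skip the fact-extraction step and prompt directly over the article text.

```python
def fact_based_labels(article_text, generate, max_pairs=10):
    """Sketch of fact-based label generation for SFT.

    `generate` is any callable that sends a prompt to an LLM and returns text.
    """
    # Step 1: break complex sentences into atomic, self-contained facts.
    facts_prompt = (
        "Rewrite the following article as a list of atomic facts, one per line. "
        "Each fact must be a single, self-contained sentence.\n\n" + article_text
    )
    facts = [line.strip() for line in generate(facts_prompt).splitlines() if line.strip()]

    # Step 2: turn each fact into a question/answer pair for supervised fine-tuning.
    qa_pairs = []
    for fact in facts[:max_pairs]:
        qa_prompt = (
            "Write one question whose answer is exactly the fact below, "
            "then the answer on the next line.\n\nFact: " + fact
        )
        lines = generate(qa_prompt).splitlines()
        if len(lines) >= 2:
            qa_pairs.append({"question": lines[0].strip(), "answer": lines[1].strip()})
    return qa_pairs
```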

The paper describes training trials, run in each case on the full set of generated labels as well as on subsets to gauge sensitivity to training sample size. Answers are validated using the same model running a test prompt (like a test for a student), allowing only pass/fail responses.

The authors compare accuracy of results across a variety of categories against results from the untuned pre-trained model, their range of scaled fine-tuned options, and against RAG over the same sections used in fine-tuning but based on Azure OpenAI hybrid search. They conclude that while token-based training does increase accuracy over the untuned model, it is not as uniform in coverage as fact-based training.

Overall they find that SFT significantly improves performance over the base pre-trained model within the domain of the added training. In this study RAG outperforms both methods but they get close to RAG performance with SFT.

I don’t find these conclusions entirely surprising. Breaking down complex sentences into individual labels feels like it should increase coverage versus learning from more complex sentences. And neither method should be quite as good as vector-based search (more global similarity measures) which could catch inferences that might span multiple statements.

Caveats and takeaway

Fine-tuning is clearly still a very dynamic field, judging by recommended papers from Deep Research in Gemini and ChatGPT, complemented by my own traditional research (Google Scholar for example, where I found this paper). There is discussion of synthetic labeling, though there are concerns that this method can lead to significant errors without detailed human review.

One paper discusses how adding a relatively small set (1000) of carefully considered human-generated labels can be much more effective for performance than large quantities of unlabeled or perhaps synthetically labeled training data.

There is also concern that under some circumstances fine-tuning could break capabilities in the pre-trained model (this is known as catastrophic forgetting).

My takeaway is that it is possible to enhance a pre-trained model with domain training data and modest training prompts and get significantly better response accuracy than the pre-trained model alone could provide. However, expert review is important to build confidence in the enhanced model, and it is clear that 100% model accuracy is still an aspirational goal.

Also Read:

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

AI RTL Generation versus AI RTL Verification



EDA Has a Value Capture Problem — An Outsider’s View
by Admin on 11-11-2025 at 10:00 am


By Liyue Yan (lyan1@bu.edu)

Fact 1: In the Computer History Museum, how many artifacts are about Electronic Design Automation (EDA)? Zero.

Fact 2: The average starting base salary for a software engineer at Netflix is $219K, and that number is $125K for Cadence; the starting base salary for a hardware engineer at Cadence is $119K (source: levels.fyi)

Fact 3: EDA industry revenue has been 2% of semiconductor industry revenue for over 25 years, and only recently climbed to 3%.

Part 1: Overview 
Starting Point

I started my inquiry into EDA’s value capture issue as a research project, puzzled by the fact that this critical technology has historically captured only 2% of the revenue generated by the semiconductor industry. As an outsider, it was not obvious whether this proportion is reasonable or not. While the question of how exactly EDA is undervalued or overvalued (and under-charging or over-charging) warrants a whole other discussion, it is not surprising that EDA folks want a larger share. When I was strolling around conferences pitching this 2% number to engage people in my research, the responses converged:

When asked, employees from the bigger vendors, particularly senior engineers, expressed strong opinions that the technologies are under-appreciated. They believe that the whole industry should get paid more and they should get paid more. “10%!” some said. Well, that might be too greedy. Some of the engineers questioned what exactly the salespeople were doing such that the industry did not get higher revenue. In general, employees from bigger vendors can’t discuss details but asked me to share my findings when they are complete. Because they want to know. And everybody wants to know, but nobody talks.

Smaller vendors (salespeople, engineers, founders, sometimes that one person playing all these roles) uniformly pointed the finger at the big players: “It is them! Those companies squeeze us by keeping their prices down. It is so unfair. Not only do we suffer, but the whole industry is also underpriced! You should ask them what they are doing!” The expression, the tone, and the structure of their blaming account are shockingly similar, as if they were rehearsed from a secret union whose sole purpose was to recount their suffering to each other. While small EDA companies have no problems sharing with me, they believe they are not so relevant, as the industry-level revenue and profit are driven by the big three.

There is an advantage to being an outsider. My ignorance gives me the courage to throw any random question at any random person who is willing to catch it. When I threw my question at an executive panel, Joe Costello, former CEO of Cadence, responded along the lines of “Profit margin? I don’t think about it. I think about the value we provide through our products. I always believe that if we bring value to the customer, then we will get value.” Oh no.

Wally Rhines, former CEO of Mentor, gave a milder response consistent with what he wrote in his book, which is that he believes that EDA has a healthy long-term relationship with the customers: “I’m convinced that salespeople for the EDA industry work with their semiconductor customers to provide them with the software they need each year, even in times of semiconductor recessions, so that the average spending can stay within the budget” (Predicting Semiconductor Business Trends After Moore’s Law, Rhines, 2019). After a short conversation with Wally, I am convinced that he is a strong market believer, believing that we are close to a good equilibrium (and if not, the market can always sort itself out with any temporary deviation). Except that, first, there can be multiple equilibria, and we don’t know whether we are in a good one that sustains EDA’s innovation, and second, even if the invisible hand can work its magic bringing us to a better point, who knows how long it would take. In the long run, the market will correct itself, but “in the long run we are all dead” (Keynes, 1923).

So, it seems that EDA people are thinking a lot about value creation but little about value capture, expecting customers to automatically appreciate their products through paying a fair amount. It turns out I wasn’t imagining this. Charles Shi, senior analyst at Needham, pointed out at DAC 2024 that many EDA folks believe that if they create value, then they will automatically get value, and unfortunately, that is not true. So, I set the goal to understand why EDA got a constant 2% share of semiconductor revenue and how this number was reached. We know in theory this outcome is due to a combination of macro-level market structure and micro-level firm practices, but I would like to know what those practices are and how they contribute to value capture. More specifically, I want to know how firms are selling to maximize their gain. The next question, then, is whom to talk to and what to ask, which is essentially the entire process. The list includes salespeople, engineers, CAD teams, and buyers.

Jumping to the end—a preview of findings

After talking to more than a dozen decision makers, formally or informally, answers quickly emerged. Even if the current number of participants may not be enough to guarantee an academic paper and the project is ongoing, I thought it would be interesting to share some key findings. Most of them should not come as a surprise. Think of it as a mirror (perhaps a slightly distorted one) – from time to time we do need to look into one, though the observations there should not be terribly unexpected.

 

A quick preview of some key findings:

  1. Professional negotiation teams are often used by the buyers, but not by the vendors.
  2. Given the long contract terms, vendors cannot easily shop for new customers. That means, at contract renewal time, customers have the leverage of alternatives, whereas vendors do not.
  3. Negotiations are almost always done by the end of fiscal year or fiscal quarter, even for private vendors. (Buyers insist this is always the case, while some vendors deny it.)
  4. Heavy quota pressure for sales potentially contributes to heavy discounts.
  5. Customers are only willing to pay small vendors and startups a small fraction of the price, even for products that are similar to or better than those of big vendors.
  6. Bundling practice erodes monopoly rents, giving away value from the better products.
  7. A small number of large customers often account for the majority of a vendor’s revenue.

The above points are mostly around contracting and negotiating, with some around structures and others around practice, and none are in favor of EDA vendors. Then there are some more positive factors:

  1. Historically sticky users.
  2. Low competition pressure from startups.
  3. Customers’ product search is only triggered by pain, never by price/cost.

The seemingly good news for EDA, of system companies as new customers, may not be so good after all. We tend to think that they have bigger budgets and are more generous, therefore increasing EDA’s profit margin. Well, they also have shorter contracts in general and train younger engineers who are more comfortable with switching tools. This increases their bargaining power. Additionally, less experienced users require more technical support, increasing costs for the vendors.

In all, industry structure, business model, and contracting practice combine as the driving factors for value capture, which I unpack in the next part.

Part 2: What I Have Learned

What business is Electronic Design Automation (EDA) in?

One can categorize a business through various lenses. To outsiders, one can explain that EDA is in the software industry. To someone who is interested in technologies, one can say EDA is in the semiconductor industry. If I were to explain it to business or economics researchers, I would say that EDA supplies process tools for chip design.

Why does it matter what business EDA is in? First, research studies suggest that stakeholders evaluate products and businesses through the lens of category, in plain words, putting them in a box first so they can be more easily understood and compared to similar offerings and players. If a business cannot be understood, i.e., put in a box, then it risks being devalued. And in this case, if investors and analysts do not know the proper reference group for EDA, then they would not cover it (or provide buy recommendations), and a mere “no coverage” from analysts can negatively affect stock market evaluation. At least that’s what existing studies say.

EDA is really one of a kind, and a tiny market. So, what reference group can an analyst use? What other stocks is the same analyst covering? CAD? The customers and downstream structure can vary a lot. Semiconductors? Yes, they are often covered by the same analyst, but the semiconductor industry is really not a good reference, except to the extent that their revenues are co-influenced by the downstream applications. Who wants to cover it? Nobody, unless you have an EE-related background. So what box do stock analysts put EDA into? The Black Box.

Willingness-to-pay is the second reason one should ask what business EDA is in. Value is in the eye of the beholder, and we should really ask how hardware designers feel about EDA.

The value of a business is often categorized into gain creators and pain relievers. EDA is certainly not the former, which is often driven by individual consumption chasing temporary joys. People say businesses that leverage the seven deadly sins are the most profitable. Amen! Pride, envy, and lust – social media and cosmetics. Gluttony – fast food, alcohol. Sloth – delivery, gaming, sports watching, etc. Rest assured that EDA satisfies none of these criteria. EDA is not consumable. EDA is not perishable. And EDA is not going to make the user addicted.

So, EDA is more like a pain reliever. Well, if not careful, some engineers may even assert that it is in the “pain producing” business: “Ugly!” “Frustrating!” “Outdated.” “Stupid.” You can hear the pain. But when I pointed out that perhaps the alternative of no tools is more painful, there wasn’t much of an argument. One issue is, we don’t know the counterfactual well enough to develop a better appreciation. You never know what you have until it’s gone.

All of the above suggests that it is naturally difficult for EDA tools to command higher prices, even before considering the challenges in competition, business model, and business practices, which we will dive into next.

But perhaps the discussion should first start with some positive notes. There are a few factors that work in favor of the big EDA vendors, which hold 90% of the market:

Sticky users. IC designers are constantly under time pressure. Changing tools means adjustment and adjustment means loss of time. No one wants to change a routine unless it is absolutely necessary, which means EDA vendors can get by as long as they are not doing a terrible job.

Little competition from startups and small players. There were many startups that outdid incumbents and won market share in the past. That time has gone. A combination of lack of investors, increased complexity of problems, and fully exercised oligopoly power has led to a decline of EDA startups. Those few small or new players have been taking the scraps: providing solutions to peripheral problems, taking only a fraction of the price that a big vendor would get, or earning nothing at all if the big vendor decides to provide the competing tool for free in a bundle with other products.

Customers’ product search is only triggered by pain. Customers do not initiate their search for alternative tools just because the existing ones are expensive. That means, there isn’t much price competition pressure once you’ve got the account. But this also means, the pressure is on the initial pricing. Once you lock the customers in, as long as the pain is manageable, the accounts stay.

To the best of my understanding, EDA vendors are leveraging the above factors fairly well, especially factor No. 2, squeezing the small players (which is not without costs). However, there are even more factors that negatively affect the industry’s value capture, among which I rank incentive design and pricing as the top culprits.

Quota

Here goes a story:

“It was 2010, Sep 3rd, just under one month away from Sep 30th, the fiscal year end of Company X. The corporate management team sets a goal to book $300 million for next year, and that translates into many sales quotas for its foot soldiers. John works in sales, and his job is to sell all types of tools to either new or existing customers. John has a yearly quota of $2 mil. He has managed to book $1.4 mil so far, and he has a chance of booking a few new customers for $850k. But he missed the quota last quarter, and if he does not deliver this time, he will be let go. John has a mortgage with a $4k monthly payment and two kids, one in 3rd grade and the other just entering school. John’s wife has not been particularly happy about his recent working hours and heavy travel.

10 days later, John closed a deal on Sep 13th, bringing his balance down to $300k from $600k. John expected to close the next new customer, Company Elppa, for $450 – $550k for 20 licenses. The negotiation went on for a week, and the customer stood firm at $350k, claiming that was their budget. By Sep 21st, John was stressed and asked his manager if it was possible to lower the price to $350k. The manager nodded yes to $400k, as he was trying to meet his own quota. The eventual deal was $400k with promised additional support for the new customer. John was relieved. The customer was happy. The management was ok with this season’s performance. The licensing price for Elppa effectively dropped from the target $550k to $400k, a discount of about 27%. Hopefully this gap can be closed in the future.

Two years later, John moved to Company Y. His accounts were left with Ron. Ron couldn’t find a way to increase the price by more than 5% with Elppa since Elppa was also trying to buy more licenses. Ron ended up charging a 5% price increase for 10 more licenses. The gap was never closed.”

This is absolutely a made-up story. Except that there was a time when an employee had to be let go if he had not met the quota for two consecutive quarters, and that customers do always want to negotiate at the end of a quarter to leverage this quota pressure, and that it is difficult to increase the price for an established account. And the mortgage probably is four thousand a month. Rumor has it some tools are given away for free in the bundle to attract clients, and the units for those tools have become unsustainable as a result. Rumor also has it there was once a 98% discount from the list price at a desperate time (compared to the usual 60-80% discount rate). While the discontinuous incentive design, the quota system used by all vendors, can increase sales on some occasions, it can also point effort in the wrong direction, especially when the quota is always the total dollar amount.

The story depicts some issues that are essential to EDA’s value capture problem, given the current business model.

Negotiation

Most customers make multi-year orders. This type of contract brings comfort to both sides. Customers can now focus on their work and develop routines, and vendors have the confidence that they will not starve in the next season. But this also means customers’ budgets are mostly locked into their existing contracts. In addition to limited customer turnover, EDA vendors can also expect few new clients. This puts vendors in a weak position in contract negotiation. With potential new customers locked into their existing deals, vendors have few alternatives equivalent to any existing customer at the time of negotiation. In contrast, customers can always switch vendors, despite the difficulty of executing a switch. This results in imbalanced negotiation power. This imbalance is particularly pronounced when: (1) the customers have already been using competing tools; (2) the customers are young and adaptable; (3) the customers are big.

Customers negotiate with a few EDA vendors; vendors negotiate with hundreds of customers. In these repeated negotiations, big customers largely use professional negotiators who do nothing but negotiate contracts, whereas vendors field their street-smart, people-oriented engineers-turned-sales. No matter how awesome these salespeople are, it is hard to argue there is no skill difference, not to mention the quota pressure. And yet, these are the only moments when any value created by EDA is cemented into revenue.

Bundling

Bundling is often considered a highly effective tool for price discrimination, able to maximize value capture by hitting each customer’s willingness-to-pay (WTP) for a set of products. It works because each customer has different levels of WTP for any specific product, but the variance is largely cancelled out when a whole bundle of products is offered together. The price for the bundle is usually fixed.
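
A toy example makes the textbook argument concrete (all numbers invented): two customers with opposite preferences over tools A and B will each pay more for the bundle than any uniform per-product pricing could extract.

```python
# Invented willingness-to-pay (in $k) of two customers for tools A and B.
wtp = {"cust1": {"A": 100, "B": 60}, "cust2": {"A": 60, "B": 100}}

def best_uniform_price_revenue(values):
    """Best revenue from a single posted price, trying each customer's WTP as the price."""
    return max(price * sum(v >= price for v in values) for price in values)

# Price each tool separately...
separate = sum(best_uniform_price_revenue([c[tool] for c in wtp.values()]) for tool in ("A", "B"))

# ...versus pricing the A+B bundle, where individual WTP differences average out.
bundle = best_uniform_price_revenue([sum(c.values()) for c in wtp.values()])

print(f"Best separate pricing revenue: ${separate}k")  # 240
print(f"Best bundle pricing revenue:   ${bundle}k")    # 320
```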

But bundling in EDA is nothing like the instrument used in typical pricing strategy, despite sharing the same name. In fact, there is no point in using the word “bundle” at all; a more accurate description would be “there is no fixed price, you can buy anything you want, we hope you try our other products, we will give you a discount when you buy more, and we just charge them all together.”

So how exactly is this practice harming EDA businesses?

One case: when a client wants to purchase a competitor’s version of product A, possibly a superior one, the sales team may say, if you buy our B and C, we can give you our A for free!

This sort of “bundle” is essentially bad pricing competition that squeezes out the small players, and big vendors themselves have to bear the consequences of low pricing for a long time due to locked-in contracts, as we discussed earlier.

The bigger vendors also did not develop mutual forbearance against each other to avoid competing on the same product features. Business school classrooms like to use the Coca-Cola vs. PepsiCo example through which students develop the understanding that both companies are so profitable largely because they do not compete on price, but on perceived differentiation through branding. It is possible to have different strengths instead of all striving for a complete flow.

The “bundle” practice also obscures the value of each tool. Instead of using A1 to compete with A2, a company can use A1, B1, and C1 together to compete with any combination of competing tools. When you offer A1 at a low price, you are effectively eroding the profit from B1 or C1, even if perhaps one of them has monopoly power. As for how much value is given away, only these vendors can tell, with their data.

Industry structure

One determining factor here is the industry structure of EDA and its downstream. The big three EDA firms alone account for 90% of market share. That market concentration should come with high market power. Well, if you look at the customers of any big vendor, the semiconductor companies, perhaps two or three of them make up 70% of its revenue. Not so much power for the vendors after all.

Business model?

Many think the business model is the problem. I am not so sure. The common argument is that even though EDA provides critical tools that enable all downstream chip design and applications, the current business model does not allow it to get a fixed share of the final value created. Others say that unless EDA can find a way to charge a fee per production unit, as silicon IP does, the business won’t be sustainable.

Let’s take a look at the underlying logic of these arguments. This is equivalent to saying the university should charge students a fixed percentage of their future incomes; otherwise it is not fair, since university degrees enable their careers. Or, power tool manufacturers should charge customers based on the value of the house they build. Even better, the coffee you bought this morning woke you up and helped you close a $2 million deal, so pay the coffee shop 1%. It does not make sense.

But this is how people are thinking, and the pricing logic for EDA vendors follows: we guess the customers’ revenue and use that to price discriminate; we believe we should charge customers with simple designs or low-value applications lower fees and increase the number as design complexity and downstream revenue increase. Whether consciously or not, many vendors are on the same page with this pricing logic.

There are two parts to this logic. One line of reasoning is that once a tool is provided, it can be used for 100 hours, or 1000 hours, and of course they should charge more for the ones used more. This part seems somewhat reasonable because the vendor is essentially providing more tools for the heavy users, even though there is little or no additional cost incurred by the vendor. A solution is cloud and usage monitoring, which could be implemented with time.

The other argument is that some customers use the tool to produce 1 million chips whereas others only produce 10 thousand. Shouldn’t one charge the former more per license, given that the tools enable them to achieve higher revenue? I do not believe so. The vendor should charge a customer up to the added value – in this case, how much cost the tool saves the customer compared to the alternatives, which also sets the customer’s maximum willingness-to-pay – and at least its own production costs (in this case, its own costs for maintenance). As for where exactly the price lands in this range, it depends on competition and negotiation, which are discussed above in the industry structure and negotiation sections.
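
Put differently, the feasible price sits in a band: the floor is the vendor’s own cost to serve the account, the ceiling is the customer’s cost savings versus the next-best alternative, and competition and negotiation decide where in the band the deal lands. A minimal sketch with invented numbers:

```python
def price_band(vendor_cost_to_serve, customer_savings_vs_alternative):
    """Feasible license price range: bounded below by the vendor's cost to serve
    the account and above by the customer's added value (cost saved vs. alternatives)."""
    if customer_savings_vs_alternative < vendor_cost_to_serve:
        return None  # no mutually beneficial deal exists
    return vendor_cost_to_serve, customer_savings_vs_alternative

# Invented example: serving the account costs $80k/yr; the tool saves the customer
# $500k/yr versus the best alternative (in-house scripts, a rival tool, manual work).
print(price_band(80_000, 500_000))   # -> (80000, 500000)
```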

So, what are the possible remedies that can improve EDA’s value capture? 

Part 3: Remedies

Three remedies are proposed:

  1. Better incentive design
  2. Smaller accounts (but more of them)
  3. Stay in different lanes

The first proposed remedy focuses on the incentive issues in negotiation, the second on negotiation power, and the third on the willingness-to-pay and negotiation power resulting from market structure.

Better incentive design

The only opportunity EDA has to capture its value is the moment a deal is made. Who makes the deal and how it is made determine two years of revenue from that contract. I am not an expert in contracting, but it does not take an expert to see that a total-dollar-amount-based quota distorts incentives.

Here is what big vendors could do without changing much of their existing business model:

Hire one or two experts in sales incentive design. Ideally, they have a PhD in Economics or Finance. They can do data work and some simple formal modeling. They have experience with, or at least an appreciation for, experimental and behavioral economics. They could be just out of graduate school, or currently working at Amazon on pricing models that never leverage much of their real training. Currently, the big vendors employ hundreds of engineers with PhDs, but only hire a few staff with a BS or MS for performance analytics and pricing. No, let an Econ PhD work, and they will be worth more than four bachelor’s.

It is best to hire them directly instead of sourcing from consulting firms, as the sales performance data needs to be reviewed constantly, and incentive schemes may need to be adjusted. However, it would be reasonable to first evaluate whether there is a real need through economic consulting groups.

I imagine EDA firms could use the same type of people for pricing.

In any case, any incentive scheme should not base quotas on the total dollar amount alone. Realized price per license needs to be incorporated into the incentive formulas as well.
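
One simple way to do that is to credit salespeople not on booked dollars alone but on booked dollars weighted by how much of a target per-license price was actually realized, so a deeply discounted deal earns proportionally less quota credit. The formula below is purely illustrative, using the numbers from the Elppa story above, and is not drawn from any vendor’s actual scheme.

```python
def quota_credit(deal_value, licenses, target_price_per_license, price_weight=0.5):
    """Illustrative quota credit blending deal size with price realization.

    price_weight = 0 reproduces today's pure dollar-amount quota;
    price_weight = 1 credits only the realized fraction of the target price.
    """
    realization = min((deal_value / licenses) / target_price_per_license, 1.0)
    return deal_value * ((1 - price_weight) + price_weight * realization)

# The Elppa deal: $400k for 20 licenses against the ~$27.5k per-license target
# implied by the original $550k ask.
print(round(quota_credit(400_000, 20, 27_500)))   # credits ~$345k instead of $400k
```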

Smaller accounts

This suggestion is not about getting new and small contracts. It is about changing the situation where two or three customers make up 70% of an EDA vendor’s revenue, leaving it little negotiation power in each contract. Obviously, EDA cannot change the semiconductor industry structure, but it can change the concentration of its contracts. That is, break bigger contracts down into smaller ones so that each one is not as critical. Essentially you are treating one customer as twenty customers. This is also aligned with EDA’s price discrimination strategy based on complexity and the total value created. Different projects of a customer can have different levels of complexity and produce at different scales.

What’s the benefit for customers when this can increase their contracting and negotiation costs, not to mention the vacancy time of each license (though we can expect that the pricing can eventually be based on actual run time)? Clean separation for budgets allocated to different products. Accountability at the project or business unit level.

Stay in different lanes

Stay in different lanes, or at least have separate strengths. Save the effort spent working on one’s shortcomings and divert those resources into innovation around one’s unique strengths. This also extends to vendors’ recent diversification into other areas, such as IoT and automotive. With new realms open for innovation, this could be a time to reset the mode of competition.

The above remedies are brief by design. They are not comprehensive solutions, but practical ideas meant to prompt a rethink of how EDA approaches value capture. EDA’s non-stop innovation is vital, but sustaining the field and keeping it attractive to talent requires taking value capture just as seriously.

Professor Liyue Yan is a researcher of strategy and entrepreneurship at Boston University’s Questrom School of Business. Her work examines strategic decision-making and entrepreneurial entry, with ongoing projects focusing on the Electronic Design Automation (EDA) industry.

Also Read:

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution

AI RTL Generation versus AI RTL Verification

PDF Solutions Charts a Course for the Future at Its User Conference and Analyst Day



WEBINAR: How PCIe Multistream Architecture is Enabling AI Connectivity
by Daniel Nenni on 11-11-2025 at 8:00 am


In the race to power ever-larger AI models, raw compute is only half the battle. The real challenge lies in moving massive datasets between processors, accelerators, and memory at speeds that keep up with trillion-parameter workloads. Synopsys tackles this head-on with its webinar, How PCIe Multistream Architecture is Enabling AI Connectivity at 64 GT/s and 128 GT/s, set for November 18, 2025, at 9:00 AM PST. This 60-minute session, led by Diwakar Kumaraswamy, a veteran SoC architect with over 15 years in high-speed interconnects, targets engineers and system designers building the next wave of AI infrastructure.

REGISTER NOW

At the heart of the discussion is PCIe Multistream Architecture, a fundamental redesign that breaks away from the single-stream limitations of earlier PCIe generations. In traditional PCIe, all data packets, whether from storage, networking, or GPU memory, share a single serialized path. This creates bottlenecks during bursty AI traffic, such as gradient updates in distributed training or real-time inference across multiple streams. Multistream changes the game by allowing multiple independent data flows to travel in parallel over the same physical link. Each stream gets its own error handling, ordering rules, and quality-of-service controls, dramatically improving throughput and reducing latency.

The webinar will contrast this with legacy designs and show how Multistream unlocks the full potential of PCIe 6.0 (64 GT/s) and PCIe 7.0 (128 GT/s). At 64 GT/s, a single x16 link delivers 256 GB/s bidirectional bandwidth, enough to feed an entire rack of GPUs without throttling. Double that to 128 GT/s in PCIe 7.0, and you’re looking at 512 GB/s per link, a leap that makes disaggregated AI architectures viable. Think GPU clusters spread across racks, NVMe storage pools serving petabytes to language models, or 800G Ethernet backhauls, all connected with microsecond-level coherence.
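
The headline numbers fall straight out of the per-lane signaling rate: 64 GT/s across 16 lanes is roughly 128 GB/s per direction, or 256 GB/s bidirectional, before FLIT framing, FEC, and other protocol overhead trim it slightly. A quick sanity check:

```python
def pcie_x16_bandwidth_gbytes_per_s(gt_per_s, lanes=16):
    """Approximate raw PCIe link bandwidth in GB/s per direction,
    ignoring FLIT framing, FEC, and header overhead."""
    return gt_per_s * lanes / 8   # 1 bit per transfer per lane, 8 bits per byte

for rate, gen in [(64, "PCIe 6.0"), (128, "PCIe 7.0")]:
    per_dir = pcie_x16_bandwidth_gbytes_per_s(rate)
    print(f"{gen} x16: ~{per_dir:.0f} GB/s per direction, ~{2 * per_dir:.0f} GB/s bidirectional")
```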

Diwakar will dive into the mechanics: how 1024-bit datapaths at 1 GHz clocks enable 2x performance across transfer sizes, how PAM4 signaling and FLIT-based error correction tame signal loss over long traces, and how adaptive equalization in the PHY layer keeps power in check. He’ll also cover practical integration, linking user logic to Multistream controllers, managing timing closure in complex SoCs, and verifying compliance with advanced test environments.

For AI system designers, the implications are profound. Multistream minimizes jitter in all-to-all GPU communication, accelerates model convergence, and supports secure multi-tenancy through per-stream encryption. It enables chip-to-chip links in SuperNICs, smart SSDs, and AI switches, all while cutting idle power by 20–30% through efficient low-power states. This isn’t just about speed—it’s about building scalable, sustainable, and secure AI platforms.

Bottom line: As PCIe 8.0 begins to take shape on the horizon, this webinar positions Multistream as the cornerstone of future AI connectivity. Whether you’re designing edge inference engines or exascale training clusters, understanding this architecture is no longer optional; it’s essential. The session promises not just theory, but actionable insights to future-proof your designs in an AI-driven world.

REGISTER NOW

Also Read:

TCAD Update from Synopsys

Synopsys and NVIDIA Forge AI Powered Future for Chip Design and Multiphysics Simulation

Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys



A Six-Minute Journey to Secure Chip Design with Caspia
by Mike Gianfagna on 11-11-2025 at 6:00 am


Hardware-level chip security has become an important topic across the semiconductor ecosystem. Thanks to sophisticated AI-fueled attacks, the hardware root of trust and its firmware are now vulnerable. And unlike software security, an instantiated weakness cannot be patched. The implications of such vulnerabilities are vast, and they are quite expensive to repair. How all this happened, what is known about the resultant weaknesses, how AI fits into the picture, and how to add expert-level security hardening to your existing chip design flow are all big questions to ponder.

A coherent view of the big picture with a clear path to secure chip design has been hard to find, until now. Caspia Technologies recently released a series of three short videos that explain what’s happening to chip security, why it’s happening and what can be done about it. Links to these videos are coming. You can watch all three of them in under six minutes, and your investment in time will show you why more secure chip design is so important and how to achieve it. So, let’s take a six-minute journey to secure chip design with Caspia.

Chapter 1 – The Elephant in the Room: Weak Chip Security

The first video frames the problem from a big-picture point of view. Why the hardware root of trust has become vulnerable and what it means for chip design are explored. The graphic at the top of this post is from this first video. What we see here is the changing landscape of DevSecOps.

For many years this high-growth industry focused on software security. Code was analyzed, weaknesses were identified, and software updates were developed and tested to increase the robustness of the code. Underlying this entire segment was the assumption that the hardware root of trust was secure and immutable. And for a long time, this was true. Some of the forces that made hardware vulnerable to attack are discussed in this segment. The result is an emerging segment of DevSecOps that focuses on fortifying the security of the hardware.

This is the sole domain of Caspia Technologies.

Chapter 2 – Identify Security Threats Before They Harm You

The second video takes a deeper dive into the chip security problem. Specific examples of hardware weaknesses and the resultant impact are taken from recent headlines. The applications cited will be familiar to all. Security risks are closer to home than you may realize.

This video also showcases the substantial progress being made across the semiconductor ecosystem to understand these new security risks. Two examples of how government, industry, and academia collaborate to track and categorize security risks are presented. These efforts form the foundation for finding and fixing security risks. The diagram below illustrates some of the details discussed.

Collaborative Efforts to Track Security Risks

Caspia Technologies and its founding team at the University of Florida in Gainesville have been pioneering catalysts for this work. The details of these efforts and their impact are also touched on.

Chapter 3 – How GenAI Adds Expert Security to Existing Design Flows 

In this final installment, approaches to applying GenAI technology in novel ways to create breakthrough security verification are presented. This video explains how GenAI capabilities can be harnessed to deliver expert-level security verification to existing design teams and flows. The graphic below summarizes some of the relevant qualities that pave the way to new approaches.

GenAI Enhanced Verification Breakthrough

The specific ways Caspia Technologies uses GenAI are detailed, with examples of how Caspia’s AI agents work together to ensure new chip designs are robust and secure against a growing threat profile.

This is the future of chip design.

To Learn More

If chip-level security is a concern (and it should be), I highly recommend investing six minutes to allow Caspia to show you the path to a more secure future. The insights are valuable and actionable.

Here is where you can view each chapter of the story:

Chapter 1

Chapter 2

Chapter 3

You can also find out more about Caspia and its impact on the industry on SemiWiki here.  And that’s a six-minute journey to secure chip design with Caspia.

Also Read:

Large Language Models: A New Frontier for SoC Security on DACtv

Caspia Focuses Security Requirements at DAC

CEO Interview with Richard Hegberg of Caspia Technologies



Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution
by Lauro Rizzatti on 11-10-2025 at 10:00 am

Lessons from the DeepChip wars Table

The competitive landscape of hardware-assisted verification (HAV) has evolved dramatically over the past decade. The strategic drivers that once defined the market have shifted in step with the rapidly changing dynamics of semiconductor design.

Design complexity has soared, with modern SoCs now integrating tens of billions of transistors, multiple dies, and an ever-expanding mix of IP blocks and communication protocols. The exponential growth of embedded software has further transformed verification, making early software validation and system-level testing on emulation and prototyping platforms essential to achieving time-to-market goals.

Meanwhile, extreme performance, power efficiency, reliability, and security/safety have emerged as central design imperatives across processors, GPUs, networking devices, and mobile applications. The rise of AI has pushed each of these parameters to new extremes, redefining what hardware-assisted verification must deliver to keep pace with next-generation semiconductor innovation.

Evolution of HAV Over the Past Decade

My mind goes back to the spirited debates that played out on the pages of DeepChip in the mid-2010s. At the time, I was consulting for Mentor Graphics, proudly waving the flag for the Veloce hardware-assisted verification (HAV) platforms. My face-to-face counterpart in those discussions was Frank Schirrmeister, then marketing manager at Cadence, standing his ground in defense of the Palladium line.

Revisiting those exchanges today underscores just how profoundly user expectations for HAV platforms have evolved. Three key aspects of deployment, once defining points of contention, have since flipped entirely: runtime versus compile time, multi-user operation, and DUT debugging. Let’s take a closer look at how each has transformed.

Compile Time versus Runtime: A Reversal of Priorities

A decade ago, shorter compile times were prized over faster runtime performance. The prevailing rationale, championed by Cadence in the processor-based HAV camp, was that rapid compilation improved engineering productivity by enabling more iterative RTL debug cycles per day. In contrast, the longer compile times typical of FPGA-based systems often negated their faster execution speeds, creating a significant workflow bottleneck.

Over the past decade, however, the dominant use case for high-end emulation has shifted dramatically. While iterative RTL debug remains relevant, today’s most demanding and value-critical tasks involve validating extremely long software workloads: booting full operating systems, running complete software stacks, executing complex application benchmarks, and, increasingly, deploying entire AI/ML models. These workloads no longer run for minutes or hours; instead, they run for days or even weeks, completely inverting the equation and rendering compile-time differences largely irrelevant.

This fundamental shift in usage has decisively tilted the value proposition for high-end applications toward high-performance, FPGA-based systems.

The Capacity Driver: From Many Small Jobs to One Long Run

Back in 2015, one of the central debates revolved around multi-user support and job granularity. Advocates of processor-based emulation systems argued that the best way to maximize the value of a large, expensive platform was to let as many engineers as possible run small, independent jobs in parallel. The key metric was system utilization: how many 10-million-gate blocks could be debugged simultaneously on a billion-gate system?

While the ability to run multiple smaller jobs remains valuable, the driver for large-capacity emulation has shifted entirely. The focus has moved from maximizing user parallelism to enabling a single, system-critical pre-silicon validation run. This change is fueled by the rise of monolithic AI accelerators and complex multi-die architectures that must be verified as cohesive systems.

Meeting this challenge demands new scaling technologies—such as high-speed, asynchronous interconnects between racks—that enable vendors to build ever-larger virtual emulation environments capable of hosting these massive designs.

The economic rationale has evolved as well: emulation is no longer justified merely by boosting daily engineering productivity, but by mitigating the catastrophic risk of a system-level bug escaping into silicon in a multi-billion-dollar project.

The Evolution of Debug: From Waveforms to Workloads

Historically, the quality of an emulator’s debug environment was defined by its ability to support waveform visibility of all internal nets. Processor-based systems excelled in this domain, offering native, simulation-like access to every signal in the design. FPGA-based systems, by contrast, were often criticized for the compromises they imposed, such as the performance and capacity overhead of inserting probes, and the need to recompile whenever those probes were relocated.

That paradigm has been fundamentally reshaped by the rise of software-centric workloads. For an engineer investigating why an operating system crashed after running for three days, dumping terabytes of low-level waveforms is not only impractical but largely irrelevant. Debug has moved up the abstraction stack—from tracing individual signals to observing entire systems. The emphasis today is on system-level visibility through software debuggers, protocol analyzers, and assertion-based verification, approaches that are less intrusive and far better suited to diagnosing the behavior of complex systems over billions of cycles.

At the same time, waveform capture technology in FPGA-based platforms has advanced dramatically. Modern instrumentation techniques have reduced traditional overhead from roughly 30% to as little as 5%, making deep signal visibility available when needed, without imposing a prohibitive cost.

Debug is no longer a monolithic task. It has become a multi-layered discipline where effectiveness depends on choosing the right level of visibility for the problem at hand.

In Summary

Recently, I had the chance to reconnect with Frank—now Marketing Director at Synopsys—who, a decade ago, was my counterpart in those spirited face-to-face debates on hardware-assisted verification. This time, however, the discussion took a very different tone. Both of us, now veterans of this ever-evolving field, found ourselves in full agreement on the dramatic metamorphosis of the semiconductor design landscape—and how it has redefined the architecture, capabilities, and deployment methodologies of HAV platforms.

What once divided the “processor-based” and “FPGA-based” camps has largely converged around a shared reality: design complexity, software dominance, and AI-driven workloads have reshaped the fundamental priorities of verification. The focus has shifted from compilation speed and multi-user utilization toward system-level validation, scalability, and long-run stability. Table I summarizes how the key attributes of HAV systems have evolved over the past decade.

Table I: The Evolution of the Main HAV Attributes from ~2015 to ~2025

More importantly, the role of HAV itself has expanded far beyond its original purpose. Once considered a late-stage verification tool—used primarily for system validation and pre-silicon software bring-up—it has now become an indispensable pillar of the entire semiconductor design flow. Modern emulation platforms span nearly the full lifecycle: from early RTL verification and hardware/software co-design to complex system integration and even first-silicon debug.

Also Read:

TCAD Update from Synopsys

Synopsys and NVIDIA Forge AI Powered Future for Chip Design and Multiphysics Simulation

Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys



Think Quantum Computing is Hype? Mastercard Begs to Disagree
by Bernard Murphy on 11-10-2025 at 6:00 am

Just got an opportunity to write a blog on PQShield, and I’m delighted for several reasons. Happy to work with a company based in Oxford and happy to work on a quantum computing-related topic, which you’ll find I will be getting into more deeply over coming months. (Need a little relief from a constant stream of AI topics.) Also important, I enjoy connecting technology to real world needs, things everyday people care about. Mastercard has something to say here.

Security is visceral when it comes to our money

I find in talking about and writing about security in tech that, while we all understand the importance of security, our understanding is primarily intellectual. Yes, security is important, but we don’t know how to monetize it. We do what we can, but we don’t need to get ahead of the game if end-user concern remains relatively low. As an end-user I share that view – until I’m hacked. As happened to my credit card a few weeks ago – I saw a charge at a Panda Express in San Francisco – a 3-hour drive from where I live. The card company removed the charge and sent me a new card. Happily it’s been years since I was last hacked. But what if hacks could happen every year, or month, or week?

In their paper, Mastercard talks about malicious actors using a “Harvest Now, Decrypt Later” attack paradigm: building a giant pool of encrypted communications and keeping it in storage until strong enough decryption mechanisms become available. We’re not aware that data we care about is in the hands of bad actors because nothing bad has happened – yet. This is not a theoretical idea. The possibility already exists for systems using DES, RSA-1024 or weaker mechanisms, which is why most, though maybe not all, weak systems have been upgraded.

The stronger threat comes from quantum computing (QC). You might think that QCs are just toys, that small qubit counts can’t handle real jobs. That view may be outdated. IBM already has a computer with a thousand usable qubits, Google is planning for a one-million-qubit system, and who knows what governments around the world can now reach, especially in hacking hotbeds.

OK, you counter, but these are very specialized systems. Governments don’t want to hack my credit cards (though I’m not sure I’d trust that assertion). But it doesn’t matter. To build demand, QC centers provide free or moderate-cost access to their systems. All you have to do is download an implementation of Shor’s algorithm or a similar factoring method, maybe from the dark Web, use it to factor the large integers behind RSA keys, and the messages encrypted with those keys can be broken.
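To make that concrete, here is a minimal, purely classical sketch of the number theory underneath Shor’s algorithm: once the period r of a^x mod N is known, two greatest common divisor computations recover the factors of N, and the factors of an RSA modulus expose the private key. The brute-force period finder below stands in for the quantum step, which is exactly the part a sufficiently large quantum computer accelerates; the function names and the toy modulus are illustrative assumptions on my part, not anything from the Mastercard paper.

# Classical skeleton of Shor's factoring method, illustrative only.
# The brute-force period finder stands in for the quantum step.
from math import gcd
import random

def find_period(a, n):
    # Smallest r > 0 with a^r = 1 (mod n); exponential classically,
    # fast on a large enough quantum computer.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    # Returns a non-trivial factorization of an odd composite n.
    while True:
        a = random.randrange(2, n)
        g = gcd(a, n)
        if g > 1:                 # lucky guess already shares a factor with n
            return g, n // g
        r = find_period(a, n)
        if r % 2:                 # need an even period
            continue
        y = pow(a, r // 2, n)
        if y == n - 1:            # trivial square root of 1, try another a
            continue
        return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15))         # e.g. (3, 5); an RSA-2048 modulus is ~617 digits

The point is not that a script like this threatens anything today, but that the hard part is a single well-understood subroutine; once quantum hardware can run it at scale, everything else is routine arithmetic.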

In fairness, recent estimates suggest that RSA-2048 may not be broken before 2031. But improvements in quantum error correction are already pushing down that limit. We really can’t be certain when that barrier will be breached. Once it is, the flood gates will open thanks to all that harvested encrypted data. That breach will affect not only credit cards but all electronic payment systems and more generally finance systems. Our intellectual concern will very rapidly become a visceral concern if we are not prepared.

PQShield and quantum-resistant encryption

Mastercard mentions two major mechanisms to defend against quantum attacks: post-quantum cryptography (PQC) and quantum key distribution (QKD). QKD offers theoretically strong guarantees but is viewed currently as a future solution, not yet ready for mass deployment. The Mastercard paper reinforces this position, citing views from multiple defense agencies and the NSA. More immediate defenses are based on PQC, for which PQShield offers solutions today.

Several algorithms have been proposed, which NIST is now supporting with finalized and draft standards. Importantly, National Security System (NSS) owners, operators, and vendors will be required to replace legacy security mechanisms with CNSA 2.0 algorithms for encryption in classified and mission-critical networks. CNSA 2.0 defines a suite of standards for encryption, hashing, and other objectives.

The transition plan conveys real urgency. New software, firmware, and public-facing systems should be upgraded in 2025. Starting in 2027, all new NSS acquisitions must be CNSA 2.0 compliant by default. By 2030, all deployed software and firmware must use CNSA 2.0 signatures, and any networking equipment that cannot be upgraded with PQC must be phased out. The Mastercard paper talks about plans in other regions, which seem not quite as far ahead, though I expect EU enthusiasm for tech regulation will quickly address that shortfall.

PQShield is already well established in PQC. This is a field where customer deals are unlikely to be announced, but other indicators are promising. Their PQCCryptoLib-Core is listed as “Implementation Under Test” at NIST. They are part of the EU-funded Fortress project. They have partnered with Carahsoft Technologies to make quantum-safe technology available to US public sector organizations. And they have published multiple research papers, so you can dig into their technology claims in detail.

Fascinating company. You can learn more HERE.

Also Read:

Podcast EP304: PQC Standards One Year On: The Semiconductor Industry’s Next Move

Formal Verification: Why It Matters for Post-Quantum Cryptography

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey



CEO Interview with Roy Barnes of TPC
by Daniel Nenni on 11-08-2025 at 10:00 am

Roy Barnes Headshot

Roy Barnes is the group president of The Partner Companies (TPC), a specialty manufacturer, overseeing TPC’s photochemical etching companies: Elcon, E-Fab, Fotofab, Microphoto and PEI.

Roy is an experienced leader known for building strong teams, driving change and inspiring people to succeed. Throughout his career, he has cultivated a leadership style grounded in open communication and collaboration—ensuring teams understand one another’s strengths and work together to achieve lasting results. Roy’s focus on connection and development has consistently helped TPC grow stronger and perform at its best.

Tell us about your company?

The Partner Companies began in 2010 with a straightforward idea: bring together specialty manufacturers who excel at solving precision manufacturing problems. Over the past fifteen years, we’ve built an integrated group of eleven companies that share something fundamental — each delivers mission-critical solutions to industries where failure isn’t an option.

What ties these companies together is deep expertise in specialized processes that can complement each other to best serve our partners’ manufacturing needs. The five companies I oversee, Elcon, E-Fab, Fotofab, Microphoto and PEI, have mastered photochemical etching for ultra-precise thin metal parts, ceramic metallization and more.

The semiconductor industry has been core to our business from the beginning. Elcon Precision, which dates back to 1967, was delivering high-precision photochemical etching solutions to semiconductor manufacturers more than forty years ago. E-Fab has been doing the same since 1982. That foundation has allowed us to grow alongside the industry as it’s evolved.

Today, TPC’s 11 specialty manufacturers operate across facilities in the United States, Asia, the United Kingdom, and Mexico. Companies like Elcon, E-Fab, Fotofab, Lattice Materials, Optiforms, and PEI each bring distinct technical capabilities, whether that’s crystal growth for precision optics, high-volume production of complex metal parts, sheet metal fabrication and assembly, or plastic injection molding.

What problems are you solving?

The semiconductor sector is undergoing rapid transformation, fueled by record-breaking sales and breakthroughs in AI, 5G, and autonomous technologies that are redefining how chips power modern life. To adapt to new technological developments, we use photochemical etching, a precise manufacturing process that uses light-sensitive coatings and chemical etchants to remove metal and create detailed components, to implement design changes quickly and stay closely involved in the design change process. Once designs are validated, we are able to rapidly scale production volumes.

This type of collaborative problem-solving is representative of how TPC helps customers overcome complex engineering challenges, bringing both process expertise and measurable results.

For example, a leading semiconductor equipment manufacturer recently shared they faced intermittent reliability failures in an aluminum nitride (AlN) heater assembly and turned to Elcon, a TPC company, for support. Elcon conducted a comprehensive failure analysis of the AlN braze stack, including the substrate, metallization, and nickel layers. Cross-sectional and adhesion testing revealed that insufficient metallization adhesion — caused by low surface roughness — was the root issue.

Elcon then conducted a design of experiments (DOE) to define the optimal AlN surface roughness range, confirming a direct correlation between roughness and metallization integrity, consistent with published adhesion mechanisms. Based on these findings, Elcon provided quantitative updates to the surface finish specification, resulting in a substantial improvement in adhesion reliability and overall heater performance. While niche, these are the kinds of engineering challenges, and solutions, that help enable next-generation equipment to make next-generation chips.

What application areas are your strongest?

Our primary focus is partnering with semiconductor equipment manufacturers, with special emphasis on innovative and scalable domestic solutions. By working closely with these companies, we help address challenges related to equipment design and production, enhancing their competitiveness and supply chain security.

What keeps your customers up at night?

Many of our customers face intense supply chain pressures, tariff impacts, rising costs from Asia, and intellectual property concerns about their designs being replicated or stolen. TPC’s domestic manufacturing capabilities, engineering support and dedication to secure, IP-focused partnerships help mitigate these risks, offering our customers peace of mind.

What does the competitive landscape look like and how do you differentiate?

TPC stands apart in the rapidly evolving semiconductor industry through our integrated manufacturing approach, offering more than just photochemical capabilities. As demand for faster, more reliable chips grows, our ability to leverage adjacent technologies enables us to provide customers with complete designs and vertically integrated solutions. While most of our competitors are small, family-owned businesses, TPC’s ongoing investments allow us to expand our engineering and scaling capabilities. The diverse range of materials we work with, from copper and stainless steel to titanium and niobium, results in new manufacturing solutions such as titanium etching.

What new features/technology are you working on?

In the heart of Silicon Valley, where we have multiple facilities, we have implemented advanced process control techniques for semiconductor processes, including statistical process control (SPC) charts, to bring greater consistency and quality to photochemical etching.

This brings semiconductor-grade precision to a traditionally less-regulated manufacturing environment, improving repeatability and yields for our customers. We also continually invest in new personnel and technologies to further enhance our offerings in specialty manufacturing, photochemical etching, and assembly.
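For readers unfamiliar with SPC charts, the mechanics are simple: estimate the natural process variation, set control limits roughly three standard deviations around the center line, and flag any measurement that falls outside them. The minimal sketch below shows an individuals chart in Python; the etch-depth readings and function names are hypothetical illustrations, not TPC process data.

# Minimal individuals (I) control chart, the basic SPC tool mentioned above.
# Readings and limits are illustrative, not actual process data.
from statistics import mean

def control_limits(measurements):
    # Center line plus 3-sigma limits; sigma is estimated from the
    # average moving range (d2 = 1.128 for subgroups of size 2).
    center = mean(measurements)
    moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
    sigma_est = mean(moving_ranges) / 1.128
    return center, center - 3 * sigma_est, center + 3 * sigma_est

def out_of_control(measurements):
    # Index and value of any point outside the control limits.
    center, lcl, ucl = control_limits(measurements)
    return [(i, x) for i, x in enumerate(measurements) if x < lcl or x > ucl]

# Hypothetical etch-depth readings in micrometers for one lot
readings = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7, 50.0, 50.2, 53.5, 50.1]
print(control_limits(readings))
print(out_of_control(readings))   # flags the 53.5 excursion

In practice, charts like this are typically maintained per tool or etch bath, so drifts show up before parts leave specification.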

How do customers normally engage with your company?

We value collaborative partnerships, helping customers develop their designs and solutions with the support of our engineering team. Our customers often come in with a print, and our role is to guide them through the manufacturing process. The photochemical etching process in semiconductor manufacturing is complex and precision-driven, especially as U.S. companies work to build domestic supply chains. TPC’s experts are here to help customers navigate it with speed, accuracy and reliability.

For more information and to get in touch with TPC, visit https://www.thepartnercos.com/.

Also Read:

CEO Interview with Wilfred Gomes of Mueon Corporation

CEO Interview with Rodrigo Jaramillo of Circuify Semiconductors

CEO Interview with Sanjive Agarwala of EuQlid Inc.