
Ceva Unleashes Wi-Fi 7 Pulse: Awakening Instant AI Brains in IoT and Physical Robots
by Daniel Nenni on 11-12-2025 at 10:00 am

Ceva WiFi 7 1x1 Client IP

In the rapidly evolving landscape of connected devices, where artificial intelligence meets the physical world, Ceva has unveiled a groundbreaking solution: the Ceva-Waves Wi-Fi 7 1×1 client IP. Announced on October 21, 2025, this IP core is designed to supercharge AI-enabled IoT devices and pioneering physical AI systems, enabling them to sense, interpret, and act with unprecedented responsiveness. As IoT ecosystems expand, projected to encompass over 30 billion devices by 2030, reliable, low-latency connectivity becomes paramount. Ceva’s innovation addresses this need head-on, leveraging the IEEE 802.11be standard to deliver ultra-high performance in compact, power-constrained form factors.

At its core, the Ceva-Waves Wi-Fi 7 1×1 client IP is a turnkey solution tailored for client-side applications, such as wearables, smart home gadgets, security cameras, and industrial sensors. Unlike bulkier access point implementations, this 1×1 configuration (one spatial stream for transmit and receive) optimizes for cost-sensitive, battery-powered designs, making it ideal for mass-market adoption. Key Wi-Fi 7 features baked into the IP include Multi-Link Operation, which allows simultaneous data transmission across multiple frequency bands (2.4 GHz, 5 GHz, and 6 GHz) for seamless aggregation and reduced interference; 4096-QAM modulation for 20% higher throughput than Wi-Fi 6; and enhanced puncturing to dodge congested channels dynamically. These capabilities slash latency to sub-millisecond levels, boost peak speeds beyond 5 Gbps, and enhance reliability in dense environments, crucial for real-time applications like augmented reality glasses or autonomous drones.
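
The 20% figure follows directly from the larger constellation; a quick back-of-the-envelope check, assuming channel width, coding rate, and guard interval are held equal:

```latex
% Bits per symbol scale with log2 of the constellation size, so moving from
% Wi-Fi 6's 1024-QAM to Wi-Fi 7's 4096-QAM gives, per spatial stream:
\frac{\log_2 4096}{\log_2 1024} = \frac{12}{10} = 1.2 \quad \Rightarrow \quad \text{about 20\% more raw throughput}
```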

What sets this IP apart is its synergy with Ceva’s broader Smart Edge portfolio, particularly the NeuPro family of NPUs. When paired with Wi-Fi 7 connectivity, these NPUs empower devices to process sensor data, run inference models, and make decisions locally at the edge. This on-device intelligence minimizes cloud dependency, fortifying data privacy by keeping sensitive information, like health metrics from a fitness tracker, off remote servers. It also extends battery life by up to 30% through efficient power management and reduces operational costs by curbing data transmission volumes. In essence, Ceva’s solution transforms passive IoT nodes into proactive physical AI agents that perceive their surroundings (via cameras or microphones), reason through AI algorithms, and act autonomously, whether adjusting a smart thermostat based on occupancy or alerting factory workers to hazards.

Tal Shalev, Vice President and General Manager of Ceva’s Wireless IoT Business Unit, emphasized the strategic timing: “Wi-Fi 7’s breakthroughs in speed, resilience, and latency are driving rapid adoption. Our turnkey solution helps customers cut complexity and time-to-market, delivering smarter, more responsive IoT experiences powered by edge intelligence.” Already licensed by multiple leading semiconductor firms, the IP has seen swift uptake, underscoring its market readiness. Industry analysts echo this enthusiasm; Andrew Zignani, Senior Research Director at ABI Research, notes, “Wi-Fi 7 is set to transform IoT by enabling the low-latency, high-throughput connectivity required for real-time edge intelligence and Physical AI. Solutions like Ceva’s are critical to bringing these capabilities into cost-sensitive, battery-powered devices.”

The implications ripple across sectors. In consumer wearables, imagine earbuds that not only stream audio but also perform real-time voice-to-text translation without lag. Smart homes could orchestrate ecosystems where lights, locks, and appliances collaborate via mesh networks, anticipating user needs through predictive AI. Industrial IoT benefits from resilient links in harsh environments, enabling predictive maintenance that prevents downtime. For emerging physical AI—think robotic companions or self-navigating vacuums—Wi-Fi 7 provides the deterministic backbone for multi-device orchestration, fostering collaborative intelligence akin to a “swarm” of sensors.

Bottom Line: Ceva’s move positions it as a linchpin in the Wi-Fi 7 rollout, with over 60 licensees already harnessing the Ceva-Waves family for diverse applications. As edge computing surges, this IP doesn’t just connect devices; it imbues them with agency, paving the way for a future where AI seamlessly bridges digital and physical realms. By democratizing advanced connectivity, Ceva accelerates innovation, ensuring that smarter, more intuitive experiences are accessible to all.

Contact CEVA

Also Read:

A Remote Touchscreen-like Control Experience for TVs and More

WEBINAR: What It Really Takes to Build a Future-Proof AI Architecture?

Podcast EP291: The Journey From One Micron to Edge AI at One Nanometer with Ceva’s Moshe Sheier


Semidynamics Inferencing Tools: Revolutionizing AI Deployment on Cervell NPU
by Daniel Nenni on 11-12-2025 at 8:00 am

SemiDynamics Cervell NPU

In the fast-paced world of AI development, bridging the gap from trained models to production-ready applications can feel like an eternity. Enter Semidynamics’ newly launched Inferencing Tools, a game-changing software suite designed to slash deployment times on the company’s Cervell RISC-V Neural Processing Unit. Announced on October 22, 2025, these tools promise to transform prototypes into robust products in hours, not weeks, by leveraging seamless ONNX Runtime integration and a library of production-grade samples.

Semidynamics, a European leader in RISC-V IP cores, has built its reputation on high-performance, open-source hardware tailored for machine learning. The Cervell NPU exemplifies this ethos: an all-in-one RISC-V architecture fusing CPU, vector, and tensor processing for zero-latency AI workloads. Configurable from 8 to 256 TOPS at INT4 precision and up to 2 GHz clock speeds, Cervell scales effortlessly for edge devices, datacenters, and everything in between. Its fully programmable design eliminates vendor lock-in, supporting large language models, deep learning, and high-performance computing with standard RISC-V AI extensions. Whether powering on-device assistants or cloud-scale vision pipelines, Cervell’s efficiency stems from its unified instruction stream, enabling extensive customization without fragmented toolchains.

At the heart of the Inferencing Tools is a high-level library layered atop Semidynamics’ ONNX Runtime Execution Provider for Cervell. Developers no longer wrestle with model conversions or low-level kernel tweaks. Instead, they point to an ONNX file, sourced from repositories like Hugging Face or the ONNX Model Zoo, select a configuration, and launch inference directly on Cervell hardware. Clean APIs handle session setup, tensor management, and orchestration, stripping away boilerplate code and minimizing integration risks. This abstraction sits comfortably above the Aliado SDK, Semidynamics’ kernel-level library for peak performance tuning, offering two lanes: rapid prototyping via the Tools or fine-grained optimization via Aliado.

ONNX Runtime integration is the secret sauce. As an open-standard format, ONNX ensures compatibility across ecosystems, and Semidynamics’ Execution Provider plugs it into Cervell’s vector and tensor units via the Aliado Kernel Library. The result? Plug-and-play execution for thousands of pre-trained models, with validated performance across diverse topologies. No more custom wrappers or compatibility headaches—developers focus on application logic, not plumbing.
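
As a rough illustration of the Execution Provider pattern described above, here is a minimal ONNX Runtime sketch in Python. The provider string "CervellExecutionProvider" and the model filename are placeholders for illustration only, not Semidynamics’ published API; the fallback logic keeps the snippet runnable on a stock ONNX Runtime install.

```python
import numpy as np
import onnxruntime as ort

# Any ONNX model works here, e.g. one pulled from the ONNX Model Zoo or Hugging Face.
MODEL_PATH = "resnet50-v2-7.onnx"  # placeholder filename

# Ask for the vendor execution provider first, falling back to CPU if the local
# ONNX Runtime build does not ship it. "CervellExecutionProvider" is a hypothetical
# name used purely for illustration.
preferred = ["CervellExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession(MODEL_PATH, providers=providers)

# ONNX Runtime exposes the model's input signature, so no manual conversion is needed.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # ResNet-style input

outputs = session.run(None, {input_name: dummy_input})
print("Top-1 class index:", int(np.argmax(outputs[0])))
```

The point is the division of labor: the session and provider handle kernel selection and tensor placement, while application code stays at the level of named inputs and outputs.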

To supercharge adoption, Semidynamics includes production-grade samples that serve as blueprints for real-world apps. For LLMs, expect ready-to-run chatbots using Llama or Qwen models, complete with session handling and response generation. Vision enthusiasts get YOLO-based object detection pipelines for real-time analysis, while image classifiers draw from ResNet, MobileNet, and AlexNet for tasks like medical imaging or autonomous navigation. These aren’t toy demos; they’re hardened for scale, with built-in error handling and optimization hooks.

The benefits ripple outward. “Developers want results,” notes Pedro Almada, Semidynamics’ lead software developer. “With the Inferencing Tools, you’re running on Cervell, prototype in hours, then harden for production.” Teams report shorter cycles, predictable latency, and maintainable codebases, ideal for embedding AI in agents, assistants, or edge pipelines. Complementing this is the Aliado Quantization Recommender, a sensitivity-aware tool that scans ONNX models for optimal bit-widths (INT4 to INT2), balancing accuracy and bandwidth without exhaustive trials.

Bottom line: In an era where AI deployment lags innovation, Semidynamics’ Inferencing Tools democratize Cervell’s power. By fusing open hardware with streamlined software, they accelerate the journey from lab to launch, empowering developers to ship smarter, faster products. As RISC-V gains traction in AI, expect this suite to redefine edge inferencing—open, scalable, and unapologetically efficient.

Also Read:

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Vision-Language Models (VLM) – the next big thing in AI?

Semidynamics adds NoC partner and ONNX for RISC-V AI applications


Adding Expertise to GenAI: An Insightful Study on Fine-Tuning
by Bernard Murphy on 11-12-2025 at 6:00 am

AI Model Tuner

I wrote earlier about how deep expertise, say for high-quality RTL design or verification, must be extracted from in-house know-how and datasets. In general, such methods start with one of many possible pre-trained models (GPT, Llama, Gemini, etc.). To this, consultants or in-house teams add fine-tuning, initially through supervised fine-tuning (SFT), refined through reinforcement learning from human feedback (RLHF), and subsequently enhanced/maintained through iterative refinement. ChatGPT claims this is the dominant flow (I incline to thinking them pretty accurate in their own domain). Supervision is through labeling (question/answer pairs). In most cases relying on human labeling alone is too expensive, so we must learn how to automate this step.
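
To make "labeling" concrete: supervised fine-tuning data is commonly expressed as question/answer (prompt/completion) pairs, often stored as JSONL. The records below are invented for illustration and are not drawn from any specific flow.

```python
import json

# Illustrative SFT records: each pairs a prompt (question) with the desired completion
# (answer). In a real flow these would be distilled from in-house design know-how.
sft_records = [
    {"prompt": "Which reset style does our RTL coding guideline require for control registers?",
     "completion": "Asynchronous assert, synchronous de-assert, active-low."},
    {"prompt": "What must a verification plan include before functional sign-off?",
     "completion": "A coverage model with covergroups mapped to every requirement in the spec."},
]

with open("sft_labels.jsonl", "w") as f:
    for record in sft_records:
        f.write(json.dumps(record) + "\n")
```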

A nice example of SFT from Microsoft

This Microsoft paper studies two different methods to fine-tune a pre-trained model (GPT-4), adding expertise on recent sporting events. The emphasis in this paper is on the SFT step rather than the steps that follow. Before you stop reading because this isn’t directly relevant to your interests, note that I can find no industry-authored papers on fine-tuning for EDA. I know from a comment at a recent conference that Microsoft hardware groups are labeling design data, so I suspect topics like this may be a safe proxy for publishing research in areas relevant to internal proprietary work.

Given the topic tested in the study, the authors chose to fine-tune with data sources (here Wikipedia articles) added after the training cutoff for the pre-trained model, in this case September 2021. They looked at two approaches to fine-tuning on this corpus, one token-based and one fact-based.

The token-based method for label generation is very simple and, per the paper, mirrors standard practice for label generation. Here they seed with a manually generated label based on the article’s overview section and prompt the model to generate a bounded set of labels from the article. The second method (which they call fact-based) is similar except that it prompts the model to break down complex sentences, if needed, into multiple atomic sentences. The authors also allowed for some filtering in this case to remove facts irrelevant to the purpose of the study. Here also the model was asked to generate multiple unique labels.
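
A rough sketch of how the two labeling strategies might be prompted; the wording below is mine, invented to show the structural difference, and does not reproduce the paper’s actual prompts.

```python
# Illustrative prompt templates for the two label-generation strategies. The wording
# is invented; only the structural difference between the approaches matters here.

TOKEN_BASED_PROMPT = """You are generating training labels from a Wikipedia article.
Here is an example label written from the article's overview section:
{seed_label}

Generate up to {max_labels} question/answer pairs covering the article below.
Article:
{article_text}
"""

FACT_BASED_PROMPT = """You are generating training labels from a Wikipedia article.
Step 1: Break each complex sentence into atomic sentences, each stating one fact.
Step 2: Discard facts unrelated to {topic}.
Step 3: For each remaining atomic fact, write a unique question/answer pair.
Generate up to {max_labels} pairs.
Article:
{article_text}
"""

def build_prompt(template: str, **fields: str) -> str:
    """Fill a template; in a real pipeline the result is sent to the labeling model."""
    return template.format(**fields)

print(build_prompt(
    TOKEN_BASED_PROMPT,
    seed_label="Q: Who won the 2022 FIFA World Cup? A: Argentina.",
    max_labels="20",
    article_text="<article text here>",
))
```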

The paper describes training trials, run in each case on the full set of generated labels, as well as on subsets to gauge sensitivity to training sample size. Answers are validated using the same model running a test prompt (like a test for a student), allowing only pass/fail responses.

The authors compare accuracy of results across a variety of categories against results from the untuned pre-trained model, their range of scaled fine-tuned options, and against RAG over the same sections used in fine-tuning but based on Azure OpenAI hybrid search. They conclude that while token-based training does increase accuracy over the untuned model, it is not as uniform in coverage as fact-based training.

Overall they find that SFT significantly improves performance over the base pre-trained model within the domain of the added training. In this study RAG outperforms both methods but they get close to RAG performance with SFT.

I don’t find these conclusions entirely surprising. Breaking down complex sentences into individual labels feels like it should increase coverage versus learning from more complex sentences. And neither method should be quite as good as vector-based search (more global similarity measures) which could catch inferences that might span multiple statements.

Caveats and takeaway

Fine-tuning is clearly still a very dynamic field, judging by recommended papers from Deep Research in Gemini and ChatGPT, complemented by my own traditional research (Google Scholar for example, where I found this paper). There is discussion of synthetic labeling, though there are concerns that this method can lead to significant errors without detailed human review.

One paper discusses how adding a relatively small set (1000) of carefully considered human-generated labels can be much more effective for performance than large quantities of unlabeled or perhaps synthetically labeled training data.

There is also concern that under some circumstances fine-tuning could break capabilities in the pre-trained model (this is known as catastrophic forgetting).

My takeaway is that it is possible to enhance a pre-trained model with domain training data and modest training prompts and get significantly better response accuracy than the pre-trained model alone could provide. However, expert review is important to build confidence in the enhanced model, and it is clear that 100% model accuracy is still an aspirational goal.

Also Read:

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

AI RTL Generation versus AI RTL Verification


EDA Has a Value Capture Problem — An Outsider’s View
by Admin on 11-11-2025 at 10:00 am


By Liyue Yan (lyan1@bu.edu)

Fact 1: In the Computer History Museum, how many artifacts are about Electronic Design Automation (EDA)? Zero.

Fact 2: The average starting base salary for a software engineer at Netflix is $219K, and that number is $125K for Cadence; the starting base salary for a hardware engineer at Cadence is $119K (source: levels.fyi).

Fact 3: EDA industry revenue has been 2% of semiconductor industry revenue for over 25 years, and only recently climbed to 3%.

Part 1: Overview 
Starting Point

I started my inquiry into EDA’s value capture issue as a research project, puzzled by the fact that this critical technology has historically captured only 2% of the revenue generated by the semiconductor industry. As an outsider, it was not obvious to me whether this proportion is reasonable or not. While the question of how exactly EDA is undervalued or overvalued (and under-charging or over-charging) warrants a whole other discussion, it is not surprising that EDA folks want a larger share. When I was strolling around conferences pitching this 2% number to engage people in my research, the responses converged:

When asked, employees from the bigger vendors, particularly senior engineers, expressed strong opinions that the technologies are under-appreciated. They believe that the whole industry should get paid more and they should get paid more. “10%!” some said. Well, that might be too greedy. Some of the engineers questioned what exactly the salespeople were doing, given that the industry did not get higher revenue. In general, employees from bigger vendors can’t discuss details but asked me to share my findings once they are complete. Because they want to know. And everybody wants to know, but nobody talks.

Smaller vendors (salespeople, engineers, founders, sometimes that one person playing all these roles), uniformly pointed the finger at the big players: “It is them! Those companies squeeze us by keeping their prices down. It is so unfair. Not only do we suffer, but the whole industry is also underpriced! You should ask them what they are doing!” The expression, the tone, and the structure of their blaming account are shockingly similar, as if they were rehearsed from a secret union whose sole purpose was to recount their suffering to each other. While small EDA companies have no problems sharing with me, they believe they are not so relevant as the industry-level revenue and profit are driven by the big three.

There is an advantage as an outsider. My ignorance provides me courage to throw any random question to any random person who is willing to catch it. When I threw my question at an executive panel, Joe Costello, former CEO of Cadence, responded along the lines of “Profit margin? I don’t think about it. I think about the value we provide through our products. I always believe that if we bring value to the customer, then we will get value.” Oh no.

Wally Rhines, former CEO of Mentor, gave a milder response consistent with what he wrote in his book, which is that he believes EDA has a healthy long-term relationship with its customers: “I’m convinced that salespeople for the EDA industry work with their semiconductor customers to provide them with the software they need each year, even in times of semiconductor recessions, so that the average spending can stay within the budget” (Predicting Semiconductor Business Trends After Moore’s Law, Rhines, 2019). After a short conversation with Wally, I am convinced that he is a strong believer in the market, one who believes we are close to a good equilibrium (and if not, that the market can always sort out any temporary deviation). Except that, first, there can be multiple equilibria, and we don’t know whether we are in a good one that sustains EDA’s innovation, and second, even if the invisible hand can work its magic bringing us to a better point, who knows how long it would take. In the long run, the market will correct itself, but “in the long run we are all dead” (Keynes, 1923).

So, it seems that EDA people are thinking a lot about value creation but little about value capture, expecting customers to automatically appreciate their products through paying a fair amount. It turns out I wasn’t imagining this. Charles Shi, senior analyst at Needham, pointed out at DAC 2024 that many EDA folks believe that if they create value, then they will automatically get value, and unfortunately, that is not true. So, I set the goal to understand why EDA got a constant 2% share of semiconductor revenue and how this number was reached. We know in theory this outcome is due to a combination of macro-level market structure and micro-level firm practices, but I would like to know what those practices are and how they contribute to value capture. More specifically, I want to know how firms are selling to maximize their gain. The next question, then, is whom to talk to and what to ask, which is essentially the entire process. The list includes salespeople, engineers, CAD teams, and buyers.

Jumping to the end—a preview of findings

After talking to more than a dozen decision makers, formally or informally, answers quickly emerged. Even if the current number of participants may not be enough to guarantee an academic paper and the project is ongoing, I thought it would be interesting to share some key findings. Most of them should not come as a surprise. Think of it as a mirror (perhaps a slightly distorted one) – from time to time we do need to look into one, though the observations there should not be terribly unexpected.

 

A quick preview of some key findings:

  1. Professional negotiation teams are often used by the buyers, but not by the vendors.
  2. Given the long contract terms, vendors cannot easily shop for replacement customers. That means, at contract renewal time, customers have the leverage of alternatives, whereas vendors do not.
  3. Negotiations are almost always done by the end of fiscal year or fiscal quarter, even for private vendors. (Buyers insist this is always the case, while some vendors deny it.)
  4. Heavy quota pressure for sales potentially contributes to heavy discounts.
  5. Customers are only willing to pay small vendors and startups a small fraction of the big vendors’ price, even for products that are similar to or better than those of the big vendors.
  6. Bundling practice erodes monopoly rents, giving away value from the better products.
  7. A small number of large customers often account for the majority of a vendor’s revenue.

The above points are mostly around contracting and negotiating, with some around structures and others around practice, and none are in favor of EDA vendors. Then there are some more positive factors:

  1. Historically sticky users.
  2. Low competition pressure from startups.
  3. Customers’ product search is only triggered by pain, never by price/cost.

The seemingly good news for EDA, of system companies as new customers, may not be too good after all. We tend to think that they have bigger budgets and are more generous, which should increase EDA’s profit margin. Well, they also have shorter contracts in general and train younger engineers who are more comfortable switching tools. This increases their bargaining power. Additionally, less experienced users require more technical support, increasing costs for the vendors.

In all, industry structure, business model, and contracting practice combine as driving factors for the value capture, which I unpack in the next part.

Part 2: What I Have Learned

What business is Electronic Design Automation (EDA) in?

One can categorize a business through various lenses. To outsiders, one can explain that EDA is in the software industry. To someone who is interested in technologies, one can say EDA is in the semiconductor industry. If I were to explain it to business or economics researchers, I would say that EDA supplies process tools for chip design.

Why does it matter what business EDA is in? First, research studies suggest that stakeholders evaluate products and businesses through the lens of category, in plain words: putting them in a box first so they can be more easily understood and compared to similar offerings and players. If a business cannot be understood, i.e., put in a box, then it risks being devalued. And in this case, if investors and analysts do not know the proper reference group for EDA, then they would not cover it (or provide buy recommendations), and a mere “no coverage” from analysts can negatively affect its stock market valuation. At least that’s what existing studies say.

EDA is really one of a kind, and a tiny market. So, what reference group can an analyst use? What other stocks is the same analyst covering? CAD? The customers and downstream structure can vary a lot. Semiconductors? Yes, they are often covered by the same analyst, but the semiconductor industry is really not a good reference, except to the extent that their revenues are co-influenced by the downstream applications. Who wants to cover it? Nobody, unless you have an EE-related background. So what box do stock analysts put EDA into? The Black Box.

Willingness-to-pay is the second reason one should ask what business EDA is in. Value is in the eye of the beholder, and we should really ask how hardware designers feel about EDA.

The value of a business is often categorized into gain creators and pain relievers. EDA is certainly not the former, which is often driven by individual consumption chasing temporary joys. People say businesses that leverage the seven deadly sins are the most profitable. Amen! Pride, envy, and lust – social media and cosmetics. Gluttony – fast food, alcohol. Sloth – delivery, gaming, sports watching, etc. Rest assured that EDA satisfies none of these criteria. EDA is not consumable. EDA is not perishable. And EDA is not going to make the user addicted.

So, EDA is more like a pain reliever. Well, if not careful, some engineers may even assert that it is in the “pain producing” business: “Ugly!” “Frustrating!” “Outdated.” “Stupid.” You can hear the pain. But when I pointed out that perhaps the alternative of no tools is more painful, there wasn’t much of an argument. One issue is, we don’t know the counterfactual well enough to develop better appreciation. You never know what you have until it’s gone.

All of the above suggests that it is naturally difficult for EDA tools to command higher prices, even before we get to the challenges in competition, business model, and business practices, which we will dive into next.

But perhaps the discussion should first start with some positive notes. There are a few factors that work in favor of the big EDA vendors, which together hold 90% of the market:

Sticky users. IC designers are constantly under time pressure. Changing tools means adjustment and adjustment means loss of time. No one wants to change a routine unless it is absolutely necessary, which means EDA vendors can get by as long as they are not doing a terrible job.

Little competition from startups and small players. There were many startups that outdid incumbents and won market share in the past. That time is gone. A combination of a lack of investors, the increased complexity of problems, and fully exercised oligopoly power has led to a decline in EDA startups. The few small or new players have been taking the scraps: providing solutions to peripheral problems, taking only a fraction of the price that a big vendor would get, or earning nothing at all if the big vendor decides to provide the competing tool for free in a bundle with other products.

Customers’ product search is only triggered by pain. Customers do not initiate their search for alternative tools just because the existing ones are expensive. That means, there isn’t much price competition pressure once you’ve got the account. But this also means, the pressure is on the initial pricing. Once you lock the customers in, as long as the pain is manageable, the accounts stay.

To the best of my understanding, EDA vendors are leveraging the above factors fairly well, especially factor No. 2, squeezing the small players (which is not without costs). However, there are even more factors that negatively affect the industry’s value capture, among which I rank incentive design and pricing as the top culprits.

Quota

Here goes a story:

“It was Sep 3rd, 2010, just under a month away from Sep 30th, the fiscal year end of Company X. The corporate management team had set a goal to book $300 million for next year, and that translated into many sales quotas for its foot soldiers. John works in sales, and his job is to sell all types of tools, to either new or existing customers. John has a yearly quota of $2 mil. He has managed to book $1.4 mil so far, and he has a chance of booking a few new customers for $850k. But he missed the quota last quarter, and if he does not deliver this time, he will be let go. John has a mortgage with a $4k monthly payment and two kids, one in 3rd grade and the other just entering school. John’s wife has not been particularly happy about his recent working hours and heavy travel.

Ten days later, on Sep 13th, John closed a deal, bringing his balance down from $600k to $300k. John expected to close the next new customer, Company Elppa, for $450k–$550k for 20 licenses. The negotiation went on for a week, and the customer stood firm at $350k, claiming that was their budget. By Sep 21st, John was stressed and asked his manager if it was possible to lower the price to $350k. The manager nodded yes to $400k, as he was trying to meet his own quota. The eventual deal was $400k with additional support promised to the new customer. John was relieved. The customer was happy. The management was ok with this season’s performance. The licensing price for Elppa effectively dropped from the target $550k to $400k, a drop of 27%. Hopefully this gap can be closed in the future.

Two years later, John moved to Company Y. His accounts were left with Ron. Ron couldn’t find a way to increase the price by more than 5% with Elppa since Elppa was also trying to buy more licenses. Ron ended up charging a 5% price increase for 10 more licenses. The gap was never closed.”

This is absolutely a made-up story. Except that there was a time when an employee had to be let go if he had not met quota for two consecutive quarters, and that customers do always want to negotiate at the end of a quarter to leverage this quota pressure, and that it is difficult to increase the price for an established account. And the mortgage probably is four thousand a month. Rumor has it some tools are given away for free in the bundle to attract clients, and the business units behind those tools have become unsustainable as a result. Rumor also has it there was once a 98% discount from the list price at a desperate time (compared to the usual 60-80% discount rate). While the discontinuous incentive design, the quota system used by all vendors, can increase sales on some occasions, it can also point effort in the wrong direction, especially when the quota is always a total dollar amount.

The story depicts some issues that are essential to EDA’s value capture problem, given the current business model.

Negotiation

Most customers make multi-year orders. This type of contract brings comfort to both sides. Customers can now focus on their work and develop routines, and vendors have the confidence that they will not starve in the next season. But this also means customers’ budgets are mostly locked into their existing contracts. In addition to limited customer turnover, EDA vendors can also expect few new clients. This puts vendors in a weak position in contract negotiation. With potential new customers locked into their existing deals, vendors have few alternatives equivalent to any existing customer at the time of negotiation. In contrast, customers can always switch vendors, despite the difficulty of executing a switch. This results in imbalanced negotiation power. This imbalance is particularly pronounced when: (1) the customers have already been using competing tools; (2) the customers are young and adaptable; (3) the customers are big.

Customers negotiate with a few EDA vendors; vendors negotiate with hundreds of customers. In these repeated negotiations, big customers largely use professional negotiators who do nothing but negotiate contracts, whereas vendors have their street-smart, people-oriented engineers-turned-salespeople. No matter how awesome these salespersons are, it is hard to argue there is no skill difference, not to mention the quota pressure. And yet, these are the only moments when any value created by EDA is cemented into revenue.

Bundling

Bundling is often considered a highly effective tool for price discrimination, able to maximize value capture by hitting each customer’s willingness-to-pay (WTP) for a set of products. It works because each customer has different levels of WTP for any specific product, but the variance is largely cancelled out when a whole bundle of products is offered together. The price for the bundle is usually fixed.
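
A textbook two-customer, two-tool illustration of that logic, with invented willingness-to-pay numbers:

```python
# Invented WTP figures for two customers and two tools.
wtp = {
    "customer_1": {"tool_A": 100, "tool_B": 40},
    "customer_2": {"tool_A": 40, "tool_B": 100},
}

# Selling each tool separately at a single price:
#   price each tool at 100 -> only the high-WTP customer buys it  -> 100 + 100 = 200
#   price each tool at 40  -> everyone buys everything            -> 4 * 40     = 160
revenue_high = 100 + 100
revenue_low = 4 * 40

# A fixed-price bundle at 140 matches each customer's total WTP, so both buy it.
bundle_revenue = 2 * 140

print(revenue_high, revenue_low, bundle_revenue)  # 200 160 280
```

The per-tool WTP variance cancels in the fixed-price bundle, which is exactly the property the EDA-style "bundle" described next gives up.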

But bundling in EDA is nothing like the instrument used in typical pricing strategy, despite sharing the same name. In fact, there is no point in using the word “bundle” at all; a more accurate description would be “there is no fixed price, you can buy anything you want, we hope you try our other products, we will give you a discount when you buy more, and we just charge them all together.”

So how exactly is this practice harming EDA businesses?

One case is when the client wants to purchase a competitor’s alternative to product A, possibly a superior one. The salesperson may say: if you buy our B and C, we can give you A for free!

This sort of “bundle” is essentially bad pricing competition that squeezes out the small players, and big vendors themselves have to bear the consequences of low pricing for a long time due to locked-in contracts, as we discussed earlier.

The bigger vendors also did not develop mutual forbearance against each other to avoid competing on the same product features. Business school classrooms like to use the Coca-Cola vs. PepsiCo example through which students develop the understanding that both companies are so profitable largely because they do not compete on price, but on perceived differentiation through branding. It is possible to have different strengths instead of all striving for a complete flow.

The “bundle” practice also obscures the value of each tool. Instead of using A1 to compete with A2, a company can use A1, B1, C1 together to compete with any combination of competing tools. When you offer A1 for a low price, you are effectively eroding the profit from B1 or C1, even if perhaps one of them has monopoly power. As for how much value is given away, only these vendors can tell with their data.

Industry structure

One determining factor here is the industry structure of EDA and its downstream. The big three EDA firms alone account for 90% of market share. That market concentration should come with high market power. Well, if you look at the customers of any big vendor, the semiconductor companies, perhaps two or three of them make up 70% of its revenue. Not so much power for the vendors after all.

Business model?

Many think the business model is the problem. I am not so sure. The common argument is that even though EDA provides critical tools to enable any downstream chip design and applications, the current business model does not allow it to get a fixed share of the final value created. Some others say that unless EDA can find a way, like silicon IP, to charge a fee per production unit, the business won’t be sustainable.

Let’s take a look at the underlying logic of these arguments. This is equivalent to saying that a university should charge students a fixed percentage of their future incomes; otherwise it is not fair, since university degrees enable their careers. Or that power tool manufacturers should charge customers based on the value of the house they build. Even better, the coffee you bought this morning woke you up and helped you close a $2 million deal, so pay the coffee shop 1%. It does not make sense.

But this is how people are thinking, and the pricing logic for EDA vendors follows: we guess the customers’ revenue and use that to price discriminate; we believe we should charge customers with simple designs or low-value applications lower fees and increase the price as design complexity and downstream revenue increase. Whether consciously or not, many vendors are on the same page with this pricing logic.

There are two parts to this logic. One line of reasoning is that once a tool is provided, it can be used for 100 hours, or 1000 hours, and of course they should charge more for the ones used more. This part seems somewhat reasonable because the vendor is essentially providing more tools for the heavy users, even though there is little or no additional cost incurred by the vendor. A solution is cloud and usage monitoring, which could be implemented with time.

The other argument is that some customers use the tool to produce 1 million chips whereas some others only produce 10 thousand. Shouldn’t one charge the former more per license, given that the tools enable them to achieve higher revenue? I do not believe so. The vendor should charge a customer at most the added value – in this case, how much cost the tool saves the customer compared to the alternatives, which also sets the customer’s maximum willingness-to-pay – and at least its own production costs (in this case, its own costs for maintenance). As for where exactly the price lands in this range, it depends on competition and negotiation, which are discussed above in the industry structure and negotiation sections.

So, what are the possible remedies that can improve EDA’s value capture? 

Part 3: Remedies

Three remedies are proposed:

  1. Better incentive design
  2. Smaller accounts (but more of them)
  3. Stay in different lanes

The first proposed remedy focuses on the incentive issues in negotiation, the second on negotiation power, and the third on the willingness-to-pay and negotiation power resulting from market structure.

Better incentive design

The only opportunity EDA has to capture its value is the moment a deal is made. Who makes the deal and how it is made determine two years of revenue from that contract. I am not an expert in contracting but it does not take an expert to see that the total-dollar-amount based quota distorts incentives.

Here is what big vendors could do without changing much of their existing business model:

Hire one or two experts in sales incentive design. Ideally, they have a PhD in Economics or Finance. They can do data work and some simple formal modeling. They have experience with, or at least appreciation for, experimental and behavioral economics. They could be just out of graduate school, or currently working at Amazon on pricing models that never leverage much of their real training. Currently, the big vendors employ hundreds of engineers with PhDs, but only hire a few staff with a BS or MS for performance analytics and pricing. No, let an Econ PhD work, and they will be worth more than four bachelor’s hires.

It is best to hire them directly instead of sourcing from consulting firms, as the sales performance data needs to be reviewed constantly, and incentive schemes may need to be adjusted. However, it would be reasonable to first evaluate whether there is a real need through economic consulting groups.

I imagine EDA firms could use the same type of people for pricing.

In any case, any incentive scheme should consider a quota not just based on the total dollar amount. Realized price per license needs to be incorporated into incentive formulas as well.

Smaller accounts

This suggestion is not about getting new and small contracts. It is about changing the situation where two or three customers make up 70% of an EDA vendor’s revenue, so that it has little negotiation power in each contract. Obviously, EDA cannot change the semiconductor industry structure, but it can change the concentration of its contracts. That is, breaking down bigger contracts into smaller ones so each one is not as critical. Essentially you are treating one customer as twenty customers. This is also aligned with EDA’s price discrimination strategy based on complexity and the total value created. Different projects of a customer can have different levels of complexity and produce at different scales.

What’s the benefit for customers when this can increase their contracting and negotiation costs, not to mention the vacancy time of each license (though we can expect that the pricing can eventually be based on actual run time)? Clean separation for budgets allocated to different products. Accountability at the project or business unit level.

Stay in different lanes

Stay in different lanes, or at least have separate strengths. Save the effort spent working on one’s shortcomings and divert those resources into innovation around one’s unique strengths. This also extends to vendors’ recent diversification into other areas, such as IoT and automotive. With new realms open for innovation, this could be the time to reset the mode of competition.

The above remedies are brief by design. They are not comprehensive solutions, but practical ideas meant to prompt a rethink of how EDA approaches value capture. EDA’s non-stop innovation is vital, but sustaining the field and keeping it attractive to talent requires taking value capture just as seriously.

Professor Liyue Yan is a researcher of strategy and entrepreneurship at Boston University’s Questrom School of Business. Her work examines strategic decision-making and entrepreneurial entry, with ongoing projects focusing on the Electronic Design Automation (EDA) industry.

Also Read:

Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution

AI RTL Generation versus AI RTL Verification

PDF Solutions Charts a Course for the Future at Its User Conference and Analyst Day


WEBINAR: How PCIe Multistream Architecture is Enabling AI Connectivity
by Daniel Nenni on 11-11-2025 at 8:00 am


In the race to power ever-larger AI models, raw compute is only half the battle. The real challenge lies in moving massive datasets between processors, accelerators, and memory at speeds that keep up with trillion-parameter workloads. Synopsys tackles this head-on with its webinar, How PCIe Multistream Architecture is Enabling AI Connectivity at 64 GT/s and 128 GT/s, set for November 18, 2025, at 9:00 AM PST. This 60-minute session, led by Diwakar Kumaraswamy, a veteran SoC architect with over 15 years in high-speed interconnects, targets engineers and system designers building the next wave of AI infrastructure.

REGISTER NOW

At the heart of the discussion is PCIe Multistream Architecture, a fundamental redesign that breaks away from the single-stream limitations of earlier PCIe generations. In traditional PCIe, all data packets, whether from storage, networking, or GPU memory, share a single serialized path. This creates bottlenecks during bursty AI traffic, such as gradient updates in distributed training or real-time inference across multiple streams. Multistream changes the game by allowing multiple independent data flows to travel in parallel over the same physical link. Each stream gets its own error handling, ordering rules, and quality-of-service controls, dramatically improving throughput and reducing latency.

The webinar will contrast this with legacy designs and show how Multistream unlocks the full potential of PCIe 6.0 (64 GT/s) and PCIe 7.0 (128 GT/s). At 64 GT/s, a single x16 link delivers 256 GB/s bidirectional bandwidth, enough to feed an entire rack of GPUs without throttling. Double that to 128 GT/s in PCIe 7.0, and you’re looking at 512 GB/s per link, a leap that makes disaggregated AI architectures viable. Think GPU clusters spread across racks, NVMe storage pools serving petabytes to language models, or 800G Ethernet backhauls, all connected with microsecond-level coherence.
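
Those headline figures are straightforward lane arithmetic (ignoring FLIT framing and protocol overhead), and they also explain the datapath numbers discussed next:

```latex
% x16 link at PCIe 6.0 rates:
64~\text{GT/s} \times 16~\text{lanes} \approx 1024~\text{Gb/s} \approx 128~\text{GB/s per direction}
\;=\; 256~\text{GB/s bidirectional}
% Doubling the rate to 128 GT/s (PCIe 7.0) doubles this to 512 GB/s bidirectional,
% and 1024 Gb/s is also why a 1024-bit internal datapath at 1 GHz keeps up with the line rate.
```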

Diwakar will dive into the mechanics: how 1024-bit datapaths at 1 GHz clocks enable 2x performance across transfer sizes, how PAM4 signaling and FLIT-based error correction tame signal loss over long traces, and how adaptive equalization in the PHY layer keeps power in check. He’ll also cover practical integration, linking user logic to Multistream controllers, managing timing closure in complex SoCs, and verifying compliance with advanced test environments.

For AI system designers, the implications are profound. Multistream minimizes jitter in all-to-all GPU communication, accelerates model convergence, and supports secure multi-tenancy through per-stream encryption. It enables chip-to-chip links in SuperNICs, smart SSDs, and AI switches, all while cutting idle power by 20–30% through efficient low-power states. This isn’t just about speed—it’s about building scalable, sustainable, and secure AI platforms.

Bottom line: As PCIe 8.0 begins to take shape on the horizon, this webinar positions Multistream as the cornerstone of future AI connectivity. Whether you’re designing edge inference engines or exascale training clusters, understanding this architecture is no longer optional; it’s essential. The session promises not just theory, but actionable insights to future-proof your designs in an AI-driven world.

REGISTER NOW

Also Read:

TCAD Update from Synopsys

Synopsys and NVIDIA Forge AI Powered Future for Chip Design and Multiphysics Simulation

Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys


A Six-Minute Journey to Secure Chip Design with Caspia
by Mike Gianfagna on 11-11-2025 at 6:00 am

A Six Minute Journey to Secure Hardware Design with Caspia

Hardware-level chip security has become an important topic across the semiconductor ecosystem. Thanks to sophisticated AI-fueled attacks, the hardware root of trust and its firmware are now vulnerable. And unlike software security, an instantiated weakness cannot be patched. The implications of such vulnerabilities are vast, and they are quite expensive to repair. How all this happened, what is known about the resultant weaknesses, how AI fits into the picture, and how to add expert-level security hardening to your existing chip design flow are all big questions to ponder.

A coherent view of the big picture with a clear path to secure chip design has been hard to find, until now. Caspia Technologies recently released a series of three short videos that explain what’s happening to chip security, why it’s happening, and what can be done about it. Links to these videos are provided below. You can watch all three of them in under six minutes, and your investment in time will show you why more secure chip design is so important and how to achieve it. So, let’s take a six-minute journey to secure chip design with Caspia.

Chapter 1 – The Elephant in the Room: Weak Chip Security

The first video frames the problem from a big-picture point of view. Why the hardware root of trust has become vulnerable and what it means for chip design are explored. The graphic at the top of this post is from this first video. What we see here is the changing landscape of DevSecOps.

For many years this high-growth industry focused on software security. Code was analyzed, weaknesses were identified, and software updates were developed and tested to increase the robustness of the code. Underlying this entire segment was the assumption that the hardware root of trust was secure and immutable. And for a long time, this was true. Some of the forces that made hardware vulnerable to attack are discussed in this segment. The result is an emerging segment of DevSecOps that focuses on fortifying the security of the hardware.

This is the sole domain of Caspia Technologies.

Chapter 2 – Identify Security Threats Before They Harm You

The second video takes a deeper dive into the chip security problem. Specific examples of hardware weaknesses and the resultant impact are taken from recent headlines. The applications cited will be familiar to all. Security risks are closer to home than you may realize.

This video also showcases the substantial progress being made across the semiconductor ecosystem to understand these new security risks. Two examples of how government, industry, and academia collaborate to track and categorize security risks are presented. These efforts form the foundation for finding and fixing security risks. The diagram below illustrates some of the details discussed.

Collaborative Efforts to Track Security Risks

Caspia Technologies and its founding team at the University of Florida in Gainesville have been pioneering catalysts for this work. The details of these efforts and their impact are also touched on.

Chapter 3 – How GenAI Adds Expert Security to Existing Design Flows 

In this final installment, approaches to apply GenAI technology in novel ways to create breakthrough security verification are presented. This video explains how GenAI capabilities can be harnessed to deliver expert level security verification to existing design teams and flows. The graphic below summarizes some of the relevant qualities that pave the way to new approaches.

GenAI Enhanced Verification Breakthrough

The specific ways Caspia Technologies uses GenAI are detailed, with examples of how Caspia’s AI agents work together to ensure new chip designs are robust and secure against a growing threat profile.

This is the future of chip design.

To Learn More

If chip-level security is a concern (and it should be), I highly recommend investing six minutes to allow Caspia to show you the path to a more secure future. The insights are valuable and actionable.

Here is where you can view each chapter of the story:

Chapter 1

Chapter 2

Chapter 3

You can also find out more about Caspia and its impact on the industry on SemiWiki here.  And that’s a six-minute journey to secure chip design with Caspia.

Also Read:

Large Language Models: A New Frontier for SoC Security on DACtv

Caspia Focuses Security Requirements at DAC

CEO Interview with Richard Hegberg of Caspia Technologies


Lessons from the DeepChip Wars: What a Decade-old Debate Teaches Us About Tech Evolution
by Lauro Rizzatti on 11-10-2025 at 10:00 am


The competitive landscape of hardware-assisted verification (HAV) has evolved dramatically over the past decade. The strategic drivers that once defined the market have shifted in step with the rapidly changing dynamics of semiconductor design.

Design complexity has soared, with modern SoCs now integrating tens of billions of transistors, multiple dies, and an ever-expanding mix of IP blocks and communication protocols. The exponential growth of embedded software has further transformed verification, making early software validation and system-level testing on emulation and prototyping platforms essential to achieving time-to-market goals.

Meanwhile, extreme performance, power efficiency, reliability, and security/safety have emerged as central design imperatives across processors, GPUs, networking devices, and mobile applications. The rise of AI has pushed each of these parameters to new extremes, redefining what hardware-assisted verification must deliver to keep pace with next-generation semiconductor innovation.

Evolution of HAV Over the Past Decade

My mind goes back to the spirited debates that played out on the pages of DeepChip in the mid-2010s. At the time, I was consulting for Mentor Graphics, proudly waving the flag for the Veloce hardware-assisted verification (HAV) platforms. My face-to-face counterpart in those discussions was Frank Schirrmeister, then marketing manager at Cadence, standing his ground in defense of the Palladium line.

Revisiting those exchanges today underscores just how profoundly user expectations for HAV platforms have evolved. Three key aspects of deployment, once defining points of contention, have since flipped entirely: runtime versus compile time, multi-user operation, and DUT debugging. Let’s take a closer look at how each has transformed.

Compile Time versus Runtime: A Reversal of Priorities

A decade ago, shorter compile times were prized over faster runtime performance. The prevailing rationale, championed by Cadence in the processor-based HAV camp, was that rapid compilation improved engineering productivity by enabling more iterative RTL debug cycles per day. In contrast, the longer compile times typical of FPGA-based systems often negated their faster execution speeds, creating a significant workflow bottleneck.

Over the past decade, however, the dominant use case for high-end emulation has shifted dramatically. While iterative RTL debug remains relevant, today’s most demanding and value-critical tasks involve validating extremely long software workloads: booting full operating systems, running complete software stacks, executing complex application benchmarks, and, increasingly, deploying entire AI/ML models. These workloads no longer run for minutes or hours; instead, they run for days or even weeks, completely inverting the equation and rendering compile time differences largely irrelevant.

This fundamental shift in usage has decisively tilted the value proposition for high-end applications toward high-performance, FPGA-based systems.

The Capacity Driver: From Many Small Jobs to One Long Run

Back in 2015, one of the central debates revolved around multi-user support and job granularity. Advocates of processor-based emulation systems argued that the best way to maximize the value of a large, expensive platform was to let as many engineers as possible run small, independent jobs in parallel. The key metric was system utilization: how many 10-million-gate blocks could be debugged simultaneously on a billion-gate system?

While the ability to run multiple smaller jobs remains valuable, the driver for large-capacity emulation has shifted entirely. The focus has moved from maximizing user parallelism to enabling a single, system-critical pre-silicon validation run. This change is fueled by the rise of monolithic AI accelerators and complex multi-die architectures that must be verified as cohesive systems.

Meeting this challenge demands new scaling technologies—such as high-speed, asynchronous interconnects between racks—that enable vendors to build ever-larger virtual emulation environments capable of hosting these massive designs.

The economic rationale has evolved as well: emulation is no longer justified merely by boosting daily engineering productivity, but by mitigating the catastrophic risk of a system-level bug escaping into silicon in a multi-billion-dollar project.

The Evolution of Debug: From Waveforms to Workloads

Historically, the quality of an emulator’s debug environment was defined by its ability to support waveform visibility of all internal nets. Processor-based systems excelled in this domain, offering native, simulation-like access to every signal in the design. FPGA-based systems, by contrast, were often criticized for the compromises they imposed, such as the performance and capacity overhead of inserting probes, and the need to recompile whenever those probes were relocated.

That paradigm has been fundamentally reshaped by the rise of software-centric workloads. For an engineer investigating why an operating system crashed after running for three days, dumping terabytes of low-level waveforms is not only impractical but largely irrelevant. Debug has moved up the abstraction stack—from tracing individual signals to observing entire systems. The emphasis today is on system-level visibility through software debuggers, protocol analyzers, and assertion-based verification, approaches that are less intrusive and far better suited to diagnosing the behavior of complex systems over billions of cycles.

At the same time, waveform capture technology in FPGA-based platforms has advanced dramatically. Modern instrumentation techniques have reduced traditional overhead from roughly 30% to as little as 5%, making deep signal visibility available when needed, without imposing a prohibitive cost.

Debug is no longer a monolithic task. It has become a multi-layered discipline where effectiveness depends on choosing the right level of visibility for the problem at hand.

In Summary

Recently, I had the chance to reconnect with Frank—now Marketing Director at Synopsys—who, a decade ago, was my counterpart in those spirited face-to-face debates on hardware-assisted verification. This time, however, the discussion took a very different tone. Both of us, now veterans of this ever-evolving field, found ourselves in full agreement on the dramatic metamorphosis of the semiconductor design landscape—and how it has redefined the architecture, capabilities, and deployment methodologies of HAV platforms.

What once divided the “processor-based” and “FPGA-based” camps has largely converged around a shared reality: design complexity, software dominance, and AI-driven workloads have reshaped the fundamental priorities of verification. The focus has shifted from compilation speed and multi-user utilization toward system-level validation, scalability, and long-run stability. Table I summarizes how the key attributes of HAV systems have evolved over the past decade.

Table I: The Evolution of the Main HAV Attributes from ~2015 to ~2025

More importantly, the role of HAV itself has expanded far beyond its original purpose. Once considered a late-stage verification tool—used primarily for system validation and pre-silicon software bring-up—it has now become an indispensable pillar of the entire semiconductor design flow. Modern emulation platforms span nearly the full lifecycle: from early RTL verification and hardware/software co-design to complex system integration and even first-silicon debug.

Also Read:

TCAD Update from Synopsys

Synopsys and NVIDIA Forge AI Powered Future for Chip Design and Multiphysics Simulation

Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys


Think Quantum Computing is Hype? Mastercard Begs to Disagree

Think Quantum Computing is Hype? Mastercard Begs to Disagree
by Bernard Murphy on 11-10-2025 at 6:00 am

Just got an opportunity to write a blog on PQShield, and I’m delighted for several reasons. Happy to work with a company based in Oxford and happy to work on a quantum computing-related topic, which I will be getting into more deeply over the coming months. (Need a little relief from a constant stream of AI topics.) Also important, I enjoy connecting technology to real-world needs, things everyday people care about. Mastercard has something to say here.

Security is visceral when it comes to our money

I find, in talking and writing about security in tech, that while we all understand the importance of security, our understanding is primarily intellectual. Yes, security is important, but we don’t know how to monetize it. We do what we can, but we don’t need to get ahead of the game if end-user concern remains relatively low. As an end-user I share that view – until I’m hacked. As happened to my credit card a few weeks ago – I saw a charge at a Panda Express in San Francisco, a 3-hour drive from where I live. The card company removed the charge and sent me a new card. Happily, it’s been years since I was last hacked. But what if hacks could happen every year, or month, or week?

In its paper, Mastercard talks about malicious actors using a “Harvest Now, Decrypt Later” attack paradigm: building a giant pool of encrypted communications and keeping it in storage until strong enough decryption mechanisms become available. We’re not aware that data we care about is in the hands of bad actors because nothing bad has happened – yet. This is not a theoretical idea. The possibility already exists for systems using DES, RSA-1024 or weaker mechanisms, which is why most, though maybe not all, weak systems have been upgraded.

The stronger threat comes from quantum computing (QC). You might think that QCs are just toys, that small qubit counts can’t handle real jobs. Your view may be outdated. IBM already has a quantum computer with over a thousand qubits, Google is planning for a one-million-qubit system, and who knows what governments around the world can now reach, especially in hacking hotbeds.

OK, you counter, but these are very specialized systems. Governments don’t want to hack my credit cards (though I’m not sure I’d trust that assertion). But it doesn’t matter. To build demand, QC centers provide free or moderate-cost access to their systems. All you have to do is download an implementation of Shor’s or a similar factoring algorithm, maybe from the dark Web, and you can start breaking RSA-encrypted messages.
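
To make the threat concrete, here is a toy Python sketch using tiny textbook primes (nothing like real key sizes, and with no padding scheme). It illustrates why factoring the public modulus is game over: anyone who learns the factors of n can reconstruct the private exponent with ordinary classical arithmetic, and that factoring step is exactly what Shor’s algorithm makes tractable for large moduli on a sufficiently capable quantum computer.

```python
# Toy textbook RSA -- purely to illustrate why factoring n breaks the scheme.
# Real RSA-2048 uses ~1024-bit primes; these are deliberately tiny.
p, q = 61, 53                      # the secret primes an attacker must find
n = p * q                          # public modulus (3233)
e = 17                             # public exponent

plaintext = 42
ciphertext = pow(plaintext, e, n)  # what a "harvest now" attacker stores today

# "Decrypt later": once n is factored (e.g., by Shor's algorithm on a large
# enough quantum computer), the private exponent is one modular inverse away.
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # modular inverse; needs Python 3.8+
recovered = pow(ciphertext, d, n)
assert recovered == plaintext
print(f"n={n}, d={d}, recovered plaintext={recovered}")
```

None of this is new mathematics; what a capable quantum computer changes is only the cost of the factoring step.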

In fairness, recent estimates suggest that RSA-2048 may not be broken before 2031. But improvements in quantum error correction are already pulling that date closer, and we really can’t be certain when the barrier will be breached. Once it is, the floodgates will open thanks to all that harvested encrypted data. That breach will affect not only credit cards but all electronic payment systems and, more generally, financial systems. Our intellectual concern will very rapidly become a visceral one if we are not prepared.

PQShield and quantum-resistant encryption

Mastercard mentions two major mechanisms to defend against quantum attacks: post-quantum cryptography (PQC) and quantum key distribution (QKD). QKD offers theoretically strong guarantees but is currently viewed as a future solution, not yet ready for mass deployment. The Mastercard paper reinforces this position, citing views from multiple defense agencies and the NSA. More immediate defenses are based on PQC, for which PQShield offers solutions today.

Several algorithms have been proposed, and NIST is supporting them with draft standards. Importantly, National Security System owners, operators and vendors will be required to replace legacy security mechanisms with CNSA 2.0 for encryption in classified and mission-critical networks. CNSA 2.0 defines a suite of standards for encryption, hashing and other objectives.

The CNSA 2.0 transition timeline conveys urgency. New software, firmware and public-facing systems should be upgraded in 2025. Starting in 2027, all new NSS acquisitions must be CNSA 2.0 compliant by default. By 2030, all deployed software and firmware must use CNSA 2.0 signatures, and any networking equipment that cannot be upgraded to PQC must be phased out. The Mastercard paper talks about plans in other regions which seem not quite as far ahead, though I expect EU enthusiasm for tech regulation will quickly address that shortfall.

PQShield is already well established in PQC. This is a field where customer deals are unlikely to be announced, but other indicators are promising. Their PQCCryptoLib-Core is on NIST’s “Implementation Under Test” list. They are part of the EU-funded Fortress project. They have partnered with Carahsoft Technologies to make quantum-safe technology available to US public sector organizations. And they have published multiple research papers, so you can dig into their technology claims in detail.

Fascinating company. You can learn more HERE.

Also Read:

Podcast EP304: PQC Standards One Year On: The Semiconductor Industry’s Next Move

Formal Verification: Why It Matters for Post-Quantum Cryptography

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey


CEO Interview with Sanjive Agarwala of EuQlid Inc.

CEO Interview with Sanjive Agarwala of EuQlid Inc.
by Daniel Nenni on 11-09-2025 at 10:00 am

EuQlid Co founders picture

Sanjive Agarwala is co-founder and CEO of EuQlid, a quantum technology company developing novel 3D imaging tools to support the design and manufacturing of semiconductors and batteries.

Prior to EuQlid, Sanjive served as Corporate Vice President and General Manager of the IP Group at Cadence Design Systems. His business included silicon-proven analog IP, advanced memory interfaces, and high-speed SerDes IP based on industry-standard protocols, as well as Tensilica Processor IP system solutions, with a focus on expanding the Cadence IP portfolio and enabling customers to get their system-on-chip (SoC) products to market faster and with higher quality.

Prior to Cadence, Sanjive was at Texas Instruments, where he led the development of the company’s digital signal processing, artificial intelligence, and digital/analog platform technology, as well as high-end SoCs targeting automotive, industrial, and wireless base-station applications.

Tell us about your company.

EuQlid, a quantum technology company, is developing novel 3D imaging tools to support the design and manufacturing of semiconductors and batteries. Our technology addresses a critical gap in the semiconductor and energy storage industries, visualizing sub-surface currents with precision and speed where today’s inspection and test tools cannot reach.

What problems are you solving?

The insatiable compute demand of AI is driving semiconductor logic, memory and advanced packaging technologies to adopt complex 3D architectures. New metrology and inspection tools are needed to control and optimize increasingly complex manufacturing workflows. Global demand for advanced metrology and inspection tools exceeds $10 billion annually and is growing rapidly with the adoption of 3D architectures.

Metrology plays a crucial role in semiconductor manufacturing by providing detailed information on the physical properties of silicon and packages. Process engineers make control adjustments to meet specific product parameters, ensuring the production of reliable semiconductor devices of high quality while minimizing waste. Effective metrology is therefore essential for maintaining the economic viability and sustainability of the overall manufacturing process.

EuQlid’s proprietary quantum 3D imaging platform, Qu-MRI™, enables non-destructive mapping of buried current flow with precision and speed that are unmatched in the industry.

What application areas are your strongest?

The next era of semiconductor scaling will be driven by 3D heterogeneous integration (3DHI) and novel 3D logic and memory architectures. EuQlid’s 3D imaging platform meets or surpasses precision and speed requirements to enable measurement of sub-surface currents and thereby determine the integrity of buried and invisible device structures. The semiconductor industry today is valued at over $600B annually and heading to $1T+ by 2030. Inspection and metrology tools have always been an integral part of manufacturing, and demand continues to grow with the increasing complexity of manufacturing workflows.

Similarly, with the “electrification of everything” revolution, energy storage devices are undergoing significant innovation and growth, and will continue to do so for decades to come. Improving battery lifetime, safety and performance requires understanding exactly how and where degradation initiates and propagates. EuQlid’s magnetic imaging platform enables visualization of the spatial and temporal current heterogeneities key to battery health monitoring and charge estimation.

What keeps your customers up at night?

Semiconductor industry leaders like TSMC, Intel and Samsung spend tens of billions of dollars annually in advancing core semiconductor technology and building manufacturing facilities. World economies are dependent on their ability to manufacture and ship flawless semiconductor products. The complexity of these devices and manufacturing flows puts immense pressure on them to stay ahead of the game with best-in-class design and manufacturing tools. They rely on companies like EuQlid to innovate and provide the metrology tools needed to deliver their products.

What does the competitive landscape look like and how do you differentiate?

Metrology is adapting to meet the varied demands and integration techniques required by different 3DHI workflows. Technologies such as X-ray fluorescence (XRF), atomic force microscopy (AFM), ellipsometry, and white-light interferometry provide engineers with unprecedented precision and capabilities, but they also have their limitations. EuQlid is addressing the whitespace in buried interconnect opens, shorts and non-wet defect inspection for in-line metrology applications.

What new features/technology are you working on?

EuQlid is developing the full-stack Qu-MRI platform, combining quantum magnetometry with advanced signal processing and machine learning to deliver buried electrical-current maps with high throughput and nano-amp sensitivity, without physical contact or destructive cross-sectioning.
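
EuQlid has not published the internals of Qu-MRI, so the following is only a generic, minimal sketch of the classical reconstruction step such a pipeline typically contains: Fourier-space inversion of the Biot-Savart law, which turns a measured out-of-plane magnetic-field map into a map of the buried sheet current that produced it. The function name, arguments and the simple low-pass regularization are illustrative assumptions, not the company’s method.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def bz_to_sheet_current(bz_map, pixel_size, standoff, k_cutoff):
    """Recover an in-plane, divergence-free sheet current density (A/m) from a
    map of the out-of-plane field Bz (T) measured at height `standoff` (m)
    above the current-carrying layer, sampled on a grid of `pixel_size` (m).
    Classic Fourier-space inversion of the Biot-Savart law for a thin sheet;
    `k_cutoff` (rad/m) low-passes the result so the exp(+k*standoff)
    upward-continuation factor does not amplify sensor noise."""
    ny, nx = bz_map.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)

    # For a current sheet J = (dG/dy, -dG/dx) built from a stream function G,
    # the field above it obeys Bz(k) = (mu0 / 2) * k * exp(-k * standoff) * G(k).
    bz_k = np.fft.fft2(bz_map)
    g_k = np.zeros_like(bz_k)
    keep = (k > 0) & (k < k_cutoff)
    g_k[keep] = 2.0 * bz_k[keep] * np.exp(k[keep] * standoff) / (MU0 * k[keep])

    jx = np.real(np.fft.ifft2(1j * KY * g_k))    # Jx = dG/dy
    jy = np.real(np.fft.ifft2(-1j * KX * g_k))   # Jy = -dG/dx
    return jx, jy
```

In practice the interesting engineering lies everywhere this sketch is naive: sensor noise, finite fields of view, multiple buried layers and defect classification, which is presumably where the advanced signal processing and machine learning mentioned above come in.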

The technology platform is already being used to analyze customer samples that represent their toughest problems, the kind that will require innovative solutions going forward.

How do customers normally engage with your company?

Customers engage with us by reaching out directly through our academic and industry networks. They can also reach us online through our website (euqlid.io) or LinkedIn.

Also Read:

CEO Interview with Wilfred Gomes of Mueon Corporation

CEO Interview with Alex Demkov of La Luce Cristallina

CEO Interview with Dr. Bernie Malouin Founder of JetCool and VP of Flex Liquid Cooling


CEO Interview with Rodrigo Jaramillo of Circuify Semiconductors

CEO Interview with Rodrigo Jaramillo of Circuify Semiconductors
by Daniel Nenni on 11-09-2025 at 8:00 am

Rodrigo Jaramillo CEO, Circuify Semiconductors Nov 2025

With over 18 years of experience in the semiconductor industry, Rodrigo Jaramillo is the Co-Founder and CEO of Circuify Semiconductors, an engineering design solutions startup based in Guadalajara, Mexico. Circuify provides ASIC, SoC, and Chiplet design services for the North American semiconductor industry, with experience covering the full design flow from architecture to tape-out in advanced silicon nodes down to sub-5 nm and “More than Moore” technologies such as Chiplets, 2.5/3D integration, and advanced packaging.

Prior to founding Circuify, he was a Senior ASIC Design Engineer at AMD, where he worked on power consumption analysis for commercial microprocessors. Before that, he was an ASIC R&D Engineer with Intel Labs, developing experimental chips in pioneering silicon technologies with a focus on ultra-low power design and PVT variation challenges. He began his career at Intel as a Physical Design Engineer, where he led a design group for memory implementations in the server processor family.

Tell us about your company.

We are proudly the first entrepreneurial semiconductor solutions provider from Mexico, delivering chip and IP design services for the global high-tech industry.

Founded in 2017 in Guadalajara, Mexico, Circuify Semiconductors was established by a team of highly skilled industry experts with more than 100 combined years of hands-on experience in IP and IC design.

Over 50% of our executive and engineering teams hold Master’s or PhD degrees in Electronic Engineering, complemented by extensive real-world experience designing ICs and IPs for applications including High-Performance Computing, Artificial Intelligence (AI), Internet of Things (IoT), and Embedded Systems. Our expertise spans advanced silicon nodes, including FinFET technologies down to 5nm and below.

What problems are you solving?

Our mission extends well beyond delivering world-class chip and IP design projects. We are focused on building Mexico’s and Latin America’s semiconductor ecosystem as well as cultivating the next generation of engineering talent.

  1. Ecosystem Development: We are helping shape Guadalajara into the “Silicon Valley of Latin America.” The Mexican government has recognized our leadership in this effort and is collaborating with us to expand local infrastructure, education, and innovation capacity—with Circuify Semiconductors at the forefront of this critical mission.
  2. More than Moore technologies: Working with innovative and disruptive silicon startups on chiplet solutions (RTL-to-GDSII), 2.5D and 3D integration including logic, cores and HBM PHYs, and high-speed die-to-die data transmission, to meet the growing demands for performance, power efficiency and advanced physical sign-off in leading-edge silicon process technologies.
  3. Leading-Edge Analog IP Development: In collaboration with select foundries, we are developing complex analog IPs for targeted customers, helping them differentiate their end solutions. This in-house IP strategy is a key differentiator for Circuify Semiconductors.
  4. Research & Development: We are engaged in multiple SoC and Analog-Mixed-Signal (AMS) R&D projects with partner universities and have published our work in IEEE and global industry conferences.
  5. Education & Integration: Through academic collaborations, we integrate real-world ASIC/SoC/Chiplet design into university programs, preparing students to meet the growing talent demand in Mexico’s semiconductor ecosystem, particularly in the Guadalajara hub.

What application areas are your strongest?

Our strongest capabilities lie in Chiplet design, Functional Verification and Analog-Mixed-Signal (AMS) design, leveraging the latest industry-standard EDA tools and methodologies.

We are also a leader in Physical Design (RTL-to-GDSII), specializing in complex floor planning, timing closure, and full physical verification sign-off for sub-5nm projects.

We collaborate closely with foundries specializing in AMS technologies and are developing proprietary in-house IPs to bring further differentiation and value to our customers’ products.

What keeps your customers up at night?

For most of our clients, the biggest challenge is meeting aggressive development and verification schedules without compromising on quality or performance. Additionally, the exponentially increasing cost of design and tape-out at advanced nodes is a major concern. We mitigate this through an efficient and cost-effective operational model.

That’s where our flexible engagement model, deep technical expertise, and reliable delivery help them stay on track—and sleep better at night.

What does the competitive landscape look like, and how do you differentiate?

The global semiconductor services industry is highly competitive, but Circuify Semiconductors stands apart through a combination of technical depth, communication efficiency, and regional advantage.

We offer:

  • A unique IP development strategy that enables customers to differentiate their end products.
  • Fluent English communication and same-time-zone collaboration, ensuring seamless interaction with North American clients.
  • A world-class engineering team with experience at Intel, AMD, NXP, and other leading semiconductor companies.
  • A deep commitment to partnership — we don’t just deliver designs; we co-create solutions and invest in our clients’ success.
  • A mission-driven approach that empowers local talent and strengthens the semiconductor value chain in Mexico and Latin America.

What new features or technologies are you working on?

At Circuify Semiconductors, we are advancing complex Analog-Mixed-Signal (AMS) and next-generation digital design solutions. Current R&D areas include:

  • Chiplet architectures for heterogeneous integration
  • Custom AMS IP development for high-performance and low-power applications
  • Design automation and verification frameworks to accelerate time-to-market

More importantly, we see ourselves not just as a design services provider, but as a strategic innovation partner—helping our clients innovate faster.

How do customers normally engage with your company?

We work with three main categories of clients:

  1. Full Collaboration Partners: Companies that engage us early at the architecture level, enabling true co-design and system-level optimization from concept through tape-out.
  2. Block-Level Partners: Clients who rely on our expertise for specific IP blocks or subsystems within larger designs.
  3. Project Acceleration Clients: Companies that require specialized analog or mixed-signal engineers to help meet critical deadlines or overcome verification bottlenecks.

While we value all types of collaborations, our preference is Category 1, where early engagement allows us to contribute to system modeling, performance analysis, and design strategy—maximizing overall impact and project efficiency.

Also Read:

Podcast EP315: The Journey to Multi-Die and Chiplet Design with Robert Kruger of Synopsys

Podcast EP313: How proteanTecs Optimizes Production Test

Chiplets: Powering the Next Generation of AI Systems