
2025 Outlook with Veerbhan Kheterpal of Quadric
by Daniel Nenni on 02-18-2025 at 10:00 am

Veerbhan Kheterpal

Quadric Inc. is the leading licensor of general-purpose neural processor IP (GPNPU) that runs both machine learning inference workloads and classic DSP and control algorithms.  Quadric’s unified hardware and software architecture is optimized for on-device ML inference. Veerbhan Kheterpal is the CEO and one of the co-founders of Quadric.

Tell us a little bit about yourself and your company.

Quadric is a startup processor IP licensing company delivering a unique general-purpose, programmable neural processor (GPNPU) IP solution. In a marketplace with more than a dozen machine learning “accelerators,” ours is the only NPU solution that is fully C++ programmable. This means that it can run any and every AI/ML graph without the need for any fallback to a host CPU or DSP.  I am one of the three co-founders of the company, which we started back in late 2017.

What was the most exciting high point of 2024 for your company?

2024 was a year of tremendous momentum building for Quadric.  We introduced and began customer deliveries of the 2nd generation of our Chimera GPNPU processor, which now scales to over 800 TOPS.  We also dramatically expanded the size of our model zoo (available in our online DevStudio at our quadric.io website) from only 20 models at the start of the year to over 200 by the close of 2024, thanks to the rapid maturation of our compiler stack.  That maturation of the technology was accompanied by a growing customer base, including a public announcement by Denso Corp. of Japan that it is basing its future ADAS systems on our GPNPU processors.

What was the biggest challenge your company faced in 2024?

2024 witnessed the beginning of the thinning of the herd of rival NPU architectures in the marketplace.  Far too many IP startups and large IP players launched AI/ML accelerator efforts from 2020 through 2023 – too many for the market to support them all.   Quadric’s goal in 2024 was to gain the traction needed to become one of the survivors of that shakeout, and to thrive through it.

Indeed, we did thrive!  We grew the team. We grew the customer base.  We greatly expanded the scope of the product.  We grew revenues by over 600%. And we just kicked off 2025 by forming a Japanese K.K. subsidiary and opening a physical office in Japan – so I think we met the challenge of 2024 head-on and succeeded!

How is your company’s work addressing this biggest challenge?

The challenge of surviving the inevitable thinning of the herd was really a question of highlighting and enhancing the thing that makes Quadric unique among AI/ML solutions: the Quadric Chimera GPNPU is 100% programmable, capable of running any and all ML workloads.  In detailed evaluation after detailed evaluation, we shone by rapidly porting dozens of new, leading-edge models to the platform – many of which compiled straight out of the box with no manual intervention.  The resulting massive increase in the size of the known-working model zoo throughout 2024 cemented in customers’ minds that full programmability with high performance was a reality, not just a promise.

What do you think the biggest growth area for 2025 will be, and why?

Every year we marvel at how fast AI models changed the previous year.  And every year the industry buckles up for a wild ride to see how much change hits in the coming year.  2025 is no different – in fact, 2025 has already seen huge changes.  DeepSeek so disrupted the conventional wisdom that stock markets quaked, politicians pontificated, and technologists paused to wonder.  And that was only within the first 4 weeks of the year!

These changes won’t slow down.  Look at the automotive market, for instance.  One year ago none of the leading OEMs and Tier 1s had vision-language models (VLMs) on their Must Have list.  VLMs at the time were barely registering in academia.  Now, VLMs are fast becoming requirements.

How is your company’s work addressing this growth?

Quadric welcomes rapid change in AI models.  Quadric processors can run every ML operator, every ML graph.  The more change the better for our business!   Quadric is continuously adding ports of new algorithms to our processors. Today we support all the major modalities of ML inference, including a variety of leading-edge transformers.  Adding a demonstration of a new ML model is a pure software effort for us, and we are focused in 2025 on widening the array of models further with each periodic software release.

What conferences did you attend in 2024 and how was the traffic?

In 2024 we attended quite a few smaller, focused technical conferences: Embedded Vision Summit, IPSoC, ACC, Innovex (part of Computex), Design Solution Forum, EdgeTech, and we held two of our own private seminars.   Those tailored conferences were robustly attended and the big mega shows – such as CES – also did well.

Will you attend conferences in 2025? Same or more?

Quadric will be expanding our outreach marketing programs in 2025 commensurate with our business growth.  Look for us at more events, and as a more prominent sponsor of those venues that bring together the chip and system architects who make programmable processor IP decisions.

How do customers engage with your company?

The first step is easy: visit our online DevStudio at www.quadric.io.  We have hundreds of benchmark performance figures – and full source code for all of those benchmarks – right on the website in DevStudio.  And our worldwide sales and applications team stands ready to follow up with training and support to help you decide if Quadric’s Chimera GPNPU processor is right for your next SoC design.

Also Read:

Tier1 Eye on Expanding Role in Automotive AI

A New Class of Accelerator Debuts

The Fallacy of Operator Fallback and the Future of Machine Learning Accelerators


Samtec Advances Multi-Channel SerDes Technology with Broadcom at DesignCon
by Mike Gianfagna on 02-18-2025 at 6:00 am


There were many announcements and significant demonstrations of new technology at the recent DesignCon. The show celebrated its 30th anniversary this year, and it has grown quite a bit. As in past years, Samtec had a commanding presence at the show. There will be more about that in a moment, but first I want to focus on a substantial demo that teamed Samtec’s interconnect technology with Broadcom’s SerDes technology for the first time. I have many memories of my time at eSilicon. Some of those memories center on how difficult it was to compete with Broadcom’s SerDes. The demo at DesignCon brought together this substantial capability with Samtec’s industry-leading interconnect to open new horizons. Let’s examine how Samtec advances multi-channel SerDes technology with Broadcom at DesignCon.

Interconnect Technology

The key enabling technology from Samtec for the DesignCon demo with Broadcom was its Si-Fly® HD 224 Gbps PAM4 co-packaged and near-chip capability. As the name implies, these products offer the system designer flexibility: either co-packaged interconnect, with the connector on the same substrate as the chip, or near-chip interconnect. The die-and-connector-on-substrate configuration creates the need for broader ecosystem collaboration, since the silicon provider, interconnect provider and OSAT all need to work together to achieve a reliable product. Broader collaboration is a trend in advanced design styles like this.

The image at the top of this post shows what these connectors look like.

To get to high-density 224 Gbps PAM4 channel capability, the co-packaged option offers the lowest loss signal transmission from the package to the front panel or backplane while providing the highest density. Samtec’s Eye Speed® Hyper Low Skew Twinax cable technology supports 224G signaling with an industry leading 1.75 ps/m max intra-pair skew. Digging a bit deeper, placement of Flyover® cable solutions on, or near, the chip package improves transmission line density and extends signal reach in high-performance applications. More information on this technology and the demo is coming.

The Demo

The demo at DesignCon showcased an evaluation platform with Broadcom’s 200 Gbps SerDes technology. Samtec Si-Fly HD CPC on-package high-speed cable systems and OSFP front panel connectors were used for the interconnect. The Broadcom 200 Gbps chips and these connectors were attached to the package to maximize system performance.

For those who want the details, here they are for the two demo platforms that were used.

Platform #1

  • Evaluates the performance of the new Si-Fly HD cable assembly
  • The 200 Gbps signal routes through 30 mm of substrate and loops back through 150 mm of Samtec Eye Speed Hyper Low Skew twinax cable
  • BER is e-13, error-free. Total channel loss is 20 dB at 212.5 Gbps

Platform #2

  • Mid-board to front panel and back
  • 200 Gbps signal travels through Si-Fly HD cable assembly (25 cm Eye Speed Hyper Low Skew Twinax) with OSFP front panel connector
  • One meter DAC cable, rated at 224G
  • BER e-9, total channel loss 48 dB at 212.5 Gbps
  • Performance will improve with release of 224G Flyover OSFP

A photo of the demo running live at the show is shown below. A link to a detailed video of this demo, recorded live from the Samtec booth at DesignCon, appears at the end of this post.

Samtec/Broadcom Demo at DesignCon

Samtec at DesignCon

As I discussed previously, Samtec has a tendency to dominate DesignCon. This year was no different. Beyond the compelling demos at the Samtec booth, Samtec products were also featured in demos with its partners throughout the show floor. In particular, there were noteworthy demos at the Rohde & Schwarz and Keysight booths.

Samtec was also quite visible in the technical program with the following contributions.

Panels

  • PCI Express & PAM4: Balancing Silicon & Interconnect Interdependencies for 128 GT/s
  • Expert Discussion: How Will AI Applications Affect High Speed Link Design?

Presentations

  • Reduced Order Geometric Macro Model of PCB Fiberglass Spatial Variation for Skew & Impedance Prediction
  • Transmitter Power Spectral Density Noise Impact for 200 Gb/s PAM 4 per Lane
  • Direct to Substrate 200G-PAM4 Co-Packaged Connectors: Is it a Reality?
  • Beyond 200G: Brick Walls of 400G links per Lane
  • Accurate Adapter Removal in High Precision Low Loss RF Interconnect Characterization
  • Determining the Requirements, Die vs Package vs Board: Multi-Level Power Distribution Network Design

To Learn More

You can learn more about Samtec’s Si-Fly HD co-packaged and near-chip systems here. You can also learn about another 224 Gbps PAM4 effort with Synopsys here. And finally, you can check out the live video of the important demo with Broadcom here.  And that’s how Samtec advances multi-channel SerDes technology with Broadcom at DesignCon.


Outlook 2025 with David Hwang of Alchip
by Daniel Nenni on 02-17-2025 at 10:00 am

Dave Hwang

Dave Hwang joined Alchip in 2021 as General Manager of Alchip’s North America Business Unit.  He also serves as Senior Vice President, Business Development.  Prior to joining Alchip, Dave served as Vice President, Worldwide Sales and Marketing for Global Unichip and in a variety of management and technical roles at TSMC.  He holds a Ph.D. in Materials Science and Engineering from North Carolina State University.

What was the most exciting high point of 2024 for your company?

That’s a great question.  It’s been a very hectic year, for sure.  Alchip will, more than likely, achieve over $1 billion in revenue in 2024. That’s a huge milestone for any company.  But the biggest, most important milestones are our 2nm shuttle tape-out in September and our 3DIC design flow readiness, which we will announce in January.

What was the biggest challenge your company faced in 2024?

By far, the biggest challenge has been finalizing the design flow for advanced packaging, which has its own set of unique challenges.  We’ve brought a new level of flexibility to our design platform to accommodate increasingly specific targets for both power consumption and high performance.

How is your company’s work addressing this biggest challenge?

Ultimately, we introduced, and are now accepting designs for, our 3DIC design flow, which offers both flexibility and robustness.  Alchip’s silicon-proven 3DIC design flow optimizes 3DIC designs along three critical dimensions: power delivery, die-to-die electrical interconnect, and system-wide thermal characterization.

What do you think the biggest growth area for 2025 will be, and why?

System companies, no doubt, have become huge consumers of ASICs.  They see them as critical differentiators, particularly in AI and HPC applications.  We’re fairly aligned with the thinking that, in the not-too-distant future, system company investments in ASIC development will drive shipments of several million AI chips in 2027-2028, creating a $60 billion to $90 billion market.

How is your company’s work addressing this growth?

We are taking a holistic systems approach, developing a design platform to accommodate advanced packaging, advanced chiplet, and advanced process technologies.  We’re doubling down on adding engineering and resources throughout our global design centers to address market-specific HPC, AI, and automotive demand.

What conferences did you attend in 2024 and how was the traffic?

We have a significant presence at all TSMC events – the TSMC Technology Symposiums and OIP – and we do it globally.  We also exhibit at industry events like the Chiplet Summit and the AI Hardware Summit.

Will you attend conferences in 2025? Same or more?

We’ll add the Open Compute Project to our plan, and we’ll look to further expand our presence at industry events and conferences.

How do customers engage with your company?

There is no one way.  Nearly every company is different, so we take a true ASIC approach and have a strategic, flexible engagement model.  This allows companies to engage in multiple ways, through many entry points along the value chain.  Essentially, we customize the ASIC value chain to meet each customer’s specific needs.

Additional questions or final comments? 

Without a doubt, the AI/HPC market is the place to be.  The key differentiator will be advanced packaging, which will be absolutely required in all high-performance computing applications.  We see ourselves as leaders in this area.

Also Read:

Alchip is Paving the Way to Future 3D Design Innovation

Alchip Technologies Sets Another Record

Collaboration Required to Maximize ASIC Chiplet Value

Synopsys and Alchip Accelerate IO & Memory Chiplet Design for Multi-Die Systems


Synopsys Expands Optical Interfaces at DesignCon
by Mike Gianfagna on 02-17-2025 at 6:00 am


The exponential growth of cloud data centers is well-known. Driven by the demands of massive applications like generative AI, state-of-the-art data centers present substantial challenges in terms of power consumption. And AI is poised to drive a 160% increase in data center power demand while also increasing demands on storage and communication efficiency, throughput and latency. Cisco has estimated that ASIC SerDes power consumption has increased 25X over the past decade or so. Something needs to change.

The core communication method for these new data centers is based on optical networking. At the recent DesignCon, Synopsys focused on how to reduce power in data center comms, offering insights, strategies and real solutions. The company is working on an innovative new approach to optical communications that essentially reduces complexity to improve power and latency. There was a presentation on this topic and a live demonstration. I’ll provide some details on what Synopsys presented and demonstrated at DesignCon. I also had a chance to speak live with the presenter to get some of the backstory. Let’s examine how Synopsys expands optical interfaces at DesignCon.

The Presentation

Priyank Shukla

Priyank Shukla, product line director for HPC IP at Synopsys, presented Linear Electro-Optical Interfaces: What, Why, When, and How? in the Chiphead Theater at DesignCon. Priyank is responsible for the deployment of Synopsys’ High Speed SerDes IP in complex SoCs. He has broad experience in analog and mixed-signal design with a strong focus on high-performance compute SoCs. He actively contributes as an IEEE 802.3 voter, playing a pivotal role in shaping industry standards. A graduate of IIT Madras, Priyank holds a US patent on low-power RTC design.

Priyank described some compelling trends. He cited the 160% increase in data center power demand statistic. A key contributor to this is data communications. He explained that interconnects contribute about 27% of total data center power and interconnect power has increased 46x from 2010 to 2024. To complete the picture, he discussed the trends in how data is communicated. In a word, it’s done with light.

Optical interconnects are becoming crucial in data centers as they address the limitations of electrical copper interconnects in high data rate environments approaching 224 Gbps, where copper’s effectiveness diminishes. This creates a need for denser interconnect networks, which in turn increases power consumption. Optical solutions, however, can extend reach and offer scalability in data center topologies. The industry is moving towards optical interconnects to reduce latency and signal integrity issues, which helps with data center expansion.

So, the question is how to reduce the power demands of optical networking interfaces. Priyank described a direct drive/linear interface to meet this challenge. The term “less is more” comes to mind. A conventional optical interface typically has re-timer logic and a DSP to facilitate reliable communication. These items add parts count, size, cost, and power. It turns out the PHY on the transmission end can do more in advanced nodes.

Priyank explained that the switch ASIC’s PHY can directly drive an optical engine on a pluggable module. This optical engine does not include re-timers or DSPs. It does the job with linear amplifiers. This streamlined approach leads to a more compact and efficient design, making the system less complex and highly functional. The figure below illustrates what this new and simplified architecture looks like.

Direct Drive/Linear Interface (Source: Synopsys, Inc.)

Implementing an approach like this can certainly be done if you’re designing the complete system and all of its components. This is not the case for the companies building massive data centers. These organizations rely on a worldwide supply chain to deliver the required components. So predictable interoperability between vendors to deliver this new capability is required. The next two sections of this post will look at this challenge.

The Demonstration

Developing the specifications required to ensure interoperability between vendors for any complex design is daunting. That is certainly true for linear optical interfaces. I’ll get to some details on that in a moment. But first, let’s look at the proof points that are already available. Synopsys has been demonstrating examples of how its IP works with other vendor’s technology for a while.

At ECOC 2023, Synopsys, in collaboration with OpenLight, a photonics venture formed with Juniper Networks, demonstrated the optical eye performance of a linear electrical-optical-electrical link transceiver. At DesignCon, Synopsys demonstrated its 112G Ethernet PHY IP enabling a linear pluggable optics (LPO) module diagnostic with TeraSignal’s ultra-low power linear driver – the industry’s first optical diagnostic interoperability at 112Gbps. 

Using a digital eye monitor, the transmitted signal was captured and analyzed and then settings were updated to minimize errors. It was shown that the Synopsys 112G Ethernet PHY IP receiver equalizes the incoming signal and achieves a near-zero bit error rate, highlighting its reliability and high performance in data transmission. Below is a photo of the demonstration hardware.

Synopsys and Terasignal Demo DesignCon 2025

The Backstory

I had the opportunity to speak with Priyank Shukla recently. We discussed his presentation at DesignCon and Priyank provided a lot of color regarding what will be needed to make the new direct drive/linear interface broadly available. To achieve this goal, standards will need to be developed regarding how the pieces work together and test equipment and software will need to be built to verify compliance. 

This is a complex process, but the payoff is substantial when you consider the power crisis currently facing large data center build out. Priyank described the OIF 112G-Linear Optical Drive Standard effort that aims to define the electrical standards to ensure linear interoperability. Priyank went on to explain that there will be a need to measure photonic parameters to verify compliance, and a different type of test equipment will be needed to achieve this goal. This represents new investment and opens new markets for test and measurement vendors.

Priyank described some of the new parameters being defined by OIF to validate compliance. These include voltage modulation amplitude (VMA) and electrical eye closure quaternary (EECQ). These are new measurements that are under development. It is expected the standard will be ready later in 2025, so the required test equipment and software needed to measure these parameters is also under development.  Achieving mainstream deployment of direct drive/linear interfaces has brought many parts of the supply chain together.

Beyond 112G, Priyank also described work on a 224G standard. Achieving a direct drive/linear interface at this speed is more difficult and will require yet more innovation and new standards. And beyond these standards, Priyank explained that the PCI SIG is also working on optimized interfaces for PCIe.

My discussion with Priyank provided more detail regarding the complexity of this new interface and why it is indeed worth the effort. I got a better appreciation for the importance of the Synopsys IP and the company’s efforts to collaborate across the ecosystem to make the vision a reality.

To Learn More

TeraSignal issued a press release describing more details about the DesignCon demo. It is entitled, TeraSignal Demonstrates Interoperability with Synopsys 112G Ethernet PHY IP for High-Speed Linear Optics Connectivity and you can read the release here.

You can also learn more about direct-drive electro-optical interfaces from this informative technical bulletin.

And if you missed DesignCon, Synopsys will be showing the TeraSignal demo at the upcoming Optical Fiber Communications Conference and Exhibition (OFC), to be held in early April at Moscone Center in San Francisco. You can find Synopsys in booth 2818.

And that’s how Synopsys expands optical interfaces at DesignCon.


Trump whacking CHIPS Act? When you hold the checkbook, you make up the new rules
by Robert Maire on 02-16-2025 at 10:00 am

Robert Maire Semiconductor Advisors
  • News reports that Trump will change CHIPS Act to suit his views
  • We specifically predicted this months ago as deals closed 11th hour
  • Blue states, enemies list & foreign entities likely to get cut
  • Big changes/cuts likely to a program Trump roundly criticized

Reuters: Exclusive: Trump prepares to change US CHIPS Act conditions, sources say

We had said that Trump would likely stop CHIPS Act funding checks, even on done deals. For all we know he might even try to claw back checks that have already been cashed.

As with everything else we are seeing from the new administration he will likely gut what he doesn’t like or turn it into something that benefits his views.

Blue states, political enemies, foreign firms, China deals – all at risk

The Reuters report suggests that companies with a China angle may get scrutinized or cut. Globalwafers of Taiwan was specifically mentioned. Trump has also accused Taiwan of stealing the Chip industry from the US and may want to seek revenge on Taiwanese companies….maybe even including TSMC.

Projects in Texas and Arizona are likely OK for the most part. Ohio’s status as a swing state likely makes it safer from getting whacked.

Micron in Idaho is likely OK, but the Schumer-sponsored Micron New York project will likely get more scrutiny.

Additional projects, like the National Semiconductor Technology Center, which was planned to go hand in hand with a High-NA EUV installation in New York, may also be at risk.

Companies doing business in China, such as Intel (mentioned in the Reuters article), may also get extra scrutiny.

Basically, as we have seen with other things, the CHIPS Act will get twisted for political advantage.

DEI in CHIPS Act likely to DIE

The CHIPS Act had some controversial clauses that we, and many others, thought went too far, such as guaranteed child care for workers.

The union labor clause is not too bad in our view, as the government has long supported unions, but with Musk around now, unions may not be so safe.

Trump loves to “renegotiate” done deals…..

Trump is famous for renegotiating deals, stiffing contractors, and reneging on agreements. We are sure Trump will want to improve on every CHIPS Act deal and will likely withhold funding to extract better terms.

Trump doesn’t need a reason or an excuse to change, gut, or just plain renege on CHIPS Act deals. Saving money is a core principle that Musk is wielding as a hatchet.

The stocks

Aside from the bad Applied Materials news this evening, this news about the CHIPS Act just adds to the many headwinds facing the industry: China, crappy memory pricing, Intel and Samsung falling behind, a weak trailing edge, emerging Chinese competition in both chips and chip equipment, weak PC and mobile phone demand, etc., etc.

Whacking the CHIPS Act does not just impact the $39B directly associated with it but likely hundreds of billions of dollars in projects that the CHIPS Act catalyzed.

Let’s also not forget the goal of bringing back semiconductor dominance to the US. But then again, Trump’s view is that we can bring back chips to the US by tariffing the heck out of imported chips. Somehow I don’t see that working.

The CHIPS Act was a nice idea while it lasted……

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

KLAC Good QTR with AI and HBM drive leading edge and China is Okay

Consumer memory slowing more than AI gaining


AMAT- In line QTR – poor guide as China Chops hit home- China mkt share loss?
by Robert Maire on 02-16-2025 at 6:00 am

Robert Maire Semiconductor Advisors
  • QTR was just “in-line” but guide was below expectations
  • We think it’s not just China export rules but share loss as well
  • Leading edge is strong but obviously not enough to offset China
  • Memory remains weak; foundry (TSMC) is the primary driver
Headwinds slow growth to flat

Applied reported $7.166B in revenues and non-GAAP EPS of $2.38 versus the street’s $7.16B and EPS of $2.30.

Applied is projecting $7.1B ±$400M and EPS down at $2.30 ±$0.18 versus the street expectation of $7.2B and $2.29 in EPS.

Applied is suggesting that the flatness lasts only a quarter or two, but we think it likely lasts throughout the year.

AMAT blames China export restrictions for 100% of weakness

We think share loss in China adds to the weakness

We continue to hear that domestic Chinese semiconductor equipment makers are taking a larger and larger percentage of WFE sales in China. Data from industry sources appears to clearly support that trend.

Chinese chip makers are doing all they can to avoid buying American tools and are buying more and more domestic tools. This trend is not going to change or reverse any time soon.

We would also add that China has been buying any and all US equipment that wasn’t nailed down in anticipation of restrictions that have finally showed up. Warehouses are likely bursting at the seams with equipment that still needs to be put to use.

So the reality is that China is a “triple play” of restrictions, inventory gluts and domestic tool maker share competition.

Only the inventory glut will improve over time; share loss and restrictions are not likely to get better.

The industry is quickly becoming a monopoly of one… TSMC

Samsung and Intel get further behind….

Although AI is great, it is virtually 100% TSMC, as Intel and Samsung have fallen further behind. We don’t see Samsung or Intel as big capex spenders in the near term.

So it’s really up to TSMC to carry the flag of AI chips.

This means that AMAT has fewer customers who are spending big……

HBM is great but the rest of memory still sucks…..

As we have stated a number of times, don’t expect memory to ramp overall capex. Applied commented on memory weakness with the obvious exception of HBM.

You have to remember that eventually HBM supply will catch up to demand and that means pricing and investment will both decline.

Eventually, all unique memory types become commodities….. That’s the problem with the memory market: it’s a constant race to the bottom.

2025 looking at a middling 0% to 5% WFE growth Y/Y

We are increasingly thinking that 2025 could be a flat year over 2024. With added headwinds from China, only TSMC at the bleeding edge, and memory weak, it’s hard to see where growth is coming from.

We have been warning forever that the recovery is slower than prior cyclical recoveries; we are clearly seeing that right now.

The stocks

AMAT was down about 5% in the aftermarket, which we think is an appropriate reaction.

The headwinds are getting too large for even bullish analysts to ignore so we will likely see a series of cuts in numbers for not just AMAT but across the industry as we get closer to a flat WFE outlook.

There is likely some collateral damage in other equipment names as the weaker outlook from the industry leader settles in.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

KLAC Good QTR with AI and HBM drive leading edge and China is Okay

Consumer memory slowing more than AI gaining

If you believe in Hobbits you can believe in Rapidus

 

 


Podcast EP274: How Axiomise Makes Formal Predictable and Normal with Dr. Ashish Darbari
by Daniel Nenni on 02-14-2025 at 10:00 am

Dan is joined by Dr. Ashish Darbari, CEO of Axiomise. Axiomise was founded in 2017 by Dr. Darbari, who has spent over two decades in the industry and top research labs increasing formal verification adoption. At Axiomise, they believe the only way to make formal methods mainstream for all semiconductor design verification is to enable and empower the end-user of formal – the hundreds of designers and verification engineers in the semiconductor industry. Dr. Darbari was joined by Neil Dunlop in 2022. Between Neil and Ashish, the Axiomise leadership team has over 60 years of formal verification experience on various projects.

Dan explores the capabilities, impact and plans of this unique company with Ashish. The various types of training Axiomise offers, from instructor-led to on-demand to custom, are reviewed. Ashish also describes the broad services work Axiomise engages in, as well as some powerful, high-impact apps the company has developed. Examples include formalISA, which can establish ISA compliance via mathematical proofs for RISC-V processors.

The footprint app is also discussed, which provides an efficient and fast method for identifying redundant design components, allowing architects and designers to exhaustively find wasted area in a design while focusing on power and performance.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Badru Agarwala of Rise Design Automation
by Daniel Nenni on 02-14-2025 at 6:00 am

Badru Agarwala

Badru Agarwala is the CEO and Co-Founder of Rise Design Automation (RDA), an EDA startup with a mission to drive a fundamental shift-left in semiconductor design, verification, and implementation by raising abstraction beyond RTL. With over 40 years of experience in EDA, Badru served as General Manager of the CSD division at Mentor Graphics (now Siemens EDA) before founding RDA, where he spearheaded advancements in high-level design, verification, and power optimization. He has also founded multiple successful startups, including Axiom Design Automation (acquired by Mentor Graphics in 2012), Silicon Automation Systems (now Sasken Communication), and Frontline Design Automation (acquired by Avant! Corporation). His expertise and visionary leadership continue to drive innovation, shaping the future of semiconductor design and verification.

Can you tell us about your company and its mission?

Rise Design Automation (RDA) is a new EDA startup that recently emerged from stealth mode. Our mission is to drive a shift-left in semiconductor design, verification, and implementation by raising abstraction beyond RTL. This approach delivers orders-of-magnitude improvements in productivity while bridging the gap between system and silicon.

RDA’s innovative tool suite is designed for scalable adoption, enabling multi-level design and verification with high performance. By combining higher abstraction with implementation insight, we help semiconductor teams accelerate development while meeting the demands of modern chip design.

What problems are you solving/what’s keeping your customers up at night?

The semiconductor industry is experiencing unprecedented growth, driven by increasing intelligence, greater connectivity, and rising design complexity across all market segments. This increased intelligence results in more software and a growing reliance on specialized silicon accelerators to meet compute demands. However, these accelerators must be tailored to specific market requirements, making a one-size-fits-all approach impractical.

Delivering architectural innovation in silicon with predictable resources, costs, and schedules is critical for customers to achieve differentiation. However, traditional RTL design flows are time-consuming and often require multiple iterations due to late-stage issues that emerge during design and implementation. Especially with a shortage of experienced hardware designers, these inefficiencies extend development cycles and introduce risks that can impact power, performance, and area (PPA) targets. A systematic and scalable approach is essential.

Rise addresses this challenge by enabling early architectural exploration with implementation correlation. This provides early visibility into silicon design estimations and trends before committing to an architecture. By integrating front-end exploration with implementation-aware insights, teams can confidently develop innovative, verifiable, and implementable architectures at their target technology node.

By operating at a higher level of abstraction, Rise delivers a 30x to 1000x increase in verification performance over traditional RTL. This speedup enables software and hardware co-simulation very early in the design cycle, allowing teams to verify both functionality and performance in a cohesive environment. By bridging the gap between software and silicon, Rise ensures that architectural decisions are validated holistically, reducing risk and accelerating overall system development.

How has the recent “speed of light” advances in AI and generative AI helped what RDA delivers to customers?

The semiconductor industry has increasingly adopted AI across a range of applications to enhance tool and user productivity, efficiency and results. However, the use of generative AI for RTL code has been met with caution, partly due to concerns about training data, verification, and reliability. As design complexity increases, AI-driven hardware design is becoming essential for reducing costs, improving accessibility, and accelerating innovation while ensuring high-quality, verifiable results.

Rise addresses this challenge by raising design abstraction and applying AI with domain expertise to transform natural-language intent into human-readable, modifiable, and verifiable high-level design code. This reduces manual effort, shortens learning curves, and minimizes late-stage surprises. Leveraging lightweight, deployable models built on pretrained large language models, Rise delivers a shift-left approach at higher abstractions in SystemVerilog, C++, and SystemC.

Rather than relying solely on AI for Quality of Results (QoR),  Rise augments human expertise with a high-level toolchain for design, verification, debug, and architectural exploration. This synergy between AI and the Rise toolchain delivers optimized RTL code and unlocks significant productivity gains, while ensuring that AI-driven EDA remains practical, verifiable, and implementable.

Additionally, our AI capabilities continue to evolve. We recently integrated AI into our Design Space Exploration (DSE), enabling intelligent, goal-driven optimizations with analysis feedback, instead of manual parameter sweeps. This AI-enhanced approach changes architectural exploration from random searching to finding the right architecture quickly.

There have been many attempts and tools to raise abstraction in semiconductor design, why is Rise different?

The EDA market has seen many efforts to raise design abstraction, yet higher-level design tools often lag in innovation. Rise takes a fundamentally different approach, building a new architecture from the ground up with several key advantages.

First, Rise is language- and abstraction-agnostic, supporting the most suitable language for each task. While existing solutions rely on C++ and SystemC, Rise adds untimed and loosely timed SystemVerilog support, easing adoption for engineers in established workflows. Its open and flexible architecture also allows seamless integration of new languages and tools. This native multi-level, multi-language support enables designers to analyze and debug at the same abstraction level in which they design.

Second, Rise delivers 10x–100x faster synthesis and exploration while maintaining predictable, high-quality RTL. This is critical for true architectural exploration, allowing teams to make informed decisions with immediate feedback.

Third, verification is deeply integrated into the Rise architecture. Automated verification methods, reusable components, and adaptable interfaces enable seamless connection with industry best practices, facilitating complete block-to-system verification with minimal effort.

Finally, we have developed a unique generative AI solution for high-level design that is tightly integrated into the Rise toolchain, as discussed in detail earlier.

Which type of markets and users do you target?

We focus on companies developing new designs, new IP blocks, and new silicon. Our solution is particularly valuable for teams engaged in architectural innovation and performance optimization, where early decisions significantly impact final silicon quality.

We see two types of users with Rise. The first consists of traditional RTL and production design teams, who are cautious in adopting new methodologies due to the high cost of failure. For these teams, maintaining high-quality QoR, a short learning curve, and comprehensive verification alongside architectural exploration is essential. The additional support of SystemVerilog and plug-in of existing EDA tools helps ease adoption and mitigate risk.

The second group includes researchers, architects, and HW/SW teams focused on early-stage exploration and software-hardware co-design. Rise tools serve this market by providing high-performance simulation and synthesis, enabling teams to efficiently explore trends and validate design choices. By integrating with high-speed, open-source implementation tools, our solution facilitates rapid iteration on architectural decisions, delivering key implementation insights and performance metrics for systems executing both hardware and software.

How do customers normally engage with your company?

We offer multiple ways for customers to engage with our products and team. The process typically begins with a discussion, presentation, or product demonstration, where we collaborate to determine the best next steps based on their needs.

To learn more, customers can visit our website rise-da.com, where we provide on-demand webinars, product videos, and additional resources. They can also contact us directly via email at info@rise-da.com.

To get the latest updates you can follow us on LinkedIn (RDA LinkedIn Page), and I personally welcome direct connections via LinkedIn (Badru Agarwala) or email at badru@rise-da.com.

Also Read:

CEO Interview: Mouna Elkhatib of AONDevices

CEO Interview: With Fabrizio Del Maffeo of Axelera AI

2025 Outlook with Dr Josep Montanyà of Nanusens


Webinar: Unlocking Next-Generation Performance for CNNs on RISC-V CPUs
by Daniel Nenni on 02-13-2025 at 10:00 am


The growing demand for high-performance AI applications continues to drive innovation in CPU architecture design. As machine learning workloads, particularly convolutional neural networks (CNNs), become more computationally intensive, architects face the challenge of delivering performance improvements while maintaining efficiency and flexibility. Our upcoming webinar unveils a cutting-edge solution—a novel architecture that introduces advanced matrix extensions and custom quantization instructions tailored for RISC-V CPUs, setting a new benchmark for CNN acceleration.

See the Replay Here!

Breaking New Ground with Scalable and Portable Design

At the heart of this innovation lies the development of scalable, VLEN-agnostic matrix multiplication/accumulation instructions. These instructions are carefully designed to maintain consistent performance across varying vector lengths, ensuring portability across different hardware configurations. By targeting both computational capacity and memory efficiency, the architecture achieves significant improvements in compute intensity while reducing memory bandwidth demands.

This scalability makes it an ideal solution for hardware vendors and system architects looking to optimize their CNN workloads without being locked into specific hardware constraints. Whether you are working with smaller, embedded systems or high-performance data center environments, this design ensures robust and adaptable performance gains.

Advanced Memory Management and Efficiency Enhancements

To further elevate performance, the architecture introduces a 2D load/store unit (LSU) that optimizes matrix tiling. This innovation significantly reduces memory access overhead by efficiently handling matrix data during computations. Additionally, Zero-Overhead Boundary handling ensures minimal user configuration cycles, simplifying the process for developers while maximizing resource utilization.

These advancements collectively deliver smoother and faster CNN processing, enhancing both usability and computational efficiency. This improved memory management directly contributes to the architecture’s superior compute intensity metrics, which reach up to an impressive 9.6 for VLEN 512 configurations.

Accelerating CNNs with New Quantization Instructions

A key highlight of this architecture is the introduction of a custom quantization instruction, designed to further enhance CNN computational speed and efficiency. This instruction streamlines data processing in quantized neural networks, reducing latency and power consumption while maintaining accuracy. The result is a marked improvement in CNN performance, with acceleration demonstrated in both GeMM and CNN-specific workloads.
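
For readers unfamiliar with quantized inference, the sketch below shows, in plain Scala, the kind of int8 multiply-accumulate and requantization arithmetic that a fused quantization instruction of this sort typically replaces. It is a generic illustration of the standard math; the multiplier, shift, and zero-point values are invented for the example and do not reflect Andes’ actual instruction semantics.

```scala
// Generic illustration of the per-element requantization step that a custom
// quantization instruction typically fuses into one operation. This is a
// plain-Scala sketch of standard int8 arithmetic, not Andes' instruction set;
// the multiplier, shift, and zero-point parameters are invented.
object RequantSketch {
  def requantize(acc: Int, multiplier: Long, shift: Int, zeroPoint: Int): Byte = {
    // Scale the 32-bit accumulator back into int8 range using a fixed-point
    // multiplier and arithmetic right shift, add the output zero point, saturate.
    val scaled  = (acc.toLong * multiplier) >> shift
    val shifted = scaled + zeroPoint
    math.max(-128L, math.min(127L, shifted)).toByte
  }

  def main(args: Array[String]): Unit = {
    // int8 dot product accumulated in 32 bits, as in a quantized CNN kernel loop.
    val a: Array[Byte] = Array(12, -3, 45, 7)
    val w: Array[Byte] = Array(-5, 9, 2, -1)
    val acc = a.zip(w).map { case (x, y) => x.toInt * y.toInt }.sum
    // Effective scale here is 2^30 / 2^31 = 0.5, chosen only for illustration.
    println(requantize(acc, multiplier = 1073741824L, shift = 31, zeroPoint = 0))
  }
}
```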

Preliminary results reveal that kernel loop MAC utilization exceeds 75%, a testament to the architecture’s capability to maximize processing power and efficiency. These metrics are bolstered by sophisticated software unrolling techniques, which optimize data flow and computation patterns to push performance even further.

Join Us to Explore the Future of RISC-V AI Performance

This breakthrough architecture showcases the vast potential of RISC-V CPUs in tackling today’s AI challenges. By integrating novel matrix extensions, custom instructions, and advanced memory management strategies, it delivers a future-ready platform for CNN acceleration.

Whether you’re a hardware designer, software developer, or AI engineer, this webinar offers invaluable insights into how you can leverage this new architecture to revolutionize your CNN applications. Don’t miss this opportunity to stay ahead of the curve in AI processing innovation.

See the Replay Here!

Andes Technology Corporation

After 16 years of effort starting from scratch, Andes Technology Corporation is now a leading embedded processor intellectual property supplier in the world. We devote ourselves to developing high-performance/low-power 32/64-bit processors and their associated SoC platforms to serve the rapidly growing embedded system applications worldwide.

Also Read:

Relationships with IP Vendors

Changing RISC-V Verification Requirements, Standardization, Infrastructure

The RISC-V and Open-Source Functional Verification Challenge


An Open-Source Approach to Developing a RISC-V Chip with XiangShan and Mulan PSL v2
by Jonah McLeod on 02-13-2025 at 6:00 am


As RISC-V gains traction in the global semiconductor industry, developers are exploring fully open-source approaches to processor design. XiangShan, a high-performance RISC-V CPU project, combined with the Mulan Permissive License v2 (Mulan PSL v2), represents a community-driven, transparent alternative to proprietary chip development models. Unlike traditional IP licensing models, where companies purchase pre-configured processor cores, XiangShan allows full access to RTL (register-transfer level) source code, enabling deep hardware customization. With the support of MinJie, an agile open-source development platform, and UnityChip, an open-source verification framework, XiangShan provides a flexible and scalable development path for startups, research institutions, and semiconductor companies looking to build custom RISC-V chips.

This article explores the design process of developing a RISC-V chip using XiangShan, highlighting the advantages, challenges, and impact of an open-source development approach.

Key Design Elements of the Chip

The latest generation of XiangShan, known as Kunminghu (Gen3), is designed to deliver high-performance computing capabilities, making it a viable alternative to commercial RISC-V processors. It features out-of-order execution with a high-performance pipeline, support for RISC-V vector extension for AI and HPC acceleration, and scalability for different process nodes, including 7nm, 12nm, and 28nm fabrication.

To streamline the design and development process, XiangShan utilizes MinJie, an open-source development platform that integrates Chisel-based RTL development, simulation and performance profiling tools, and agile methodologies, reducing iteration time for hardware design. One of the biggest challenges in open-source hardware is ensuring reliability and security. UnityChip provides functional verification to detect architectural bugs early, security verification to test speculative execution vulnerabilities (such as Spectre-like attacks), and crowdsourced debugging tools, enabling contributions from universities and independent researchers. Together, these elements form a comprehensive, open-source RISC-V development ecosystem, fostering innovation while maintaining full transparency.

Chisel (Constructing Hardware in a Scala Embedded Language) is a high-level hardware description language (HDL) that simplifies register-transfer level (RTL) design by enabling more modular, reusable, and parameterized hardware development compared to traditional HDLs like Verilog and VHDL.

In Chisel-based RTL development, designers use Scala-based programming constructs to define digital circuits, allowing for faster prototyping, better code reusability, and easier debugging. It integrates with simulation and performance profiling tools, which help validate design correctness, optimize computational efficiency, and analyze power consumption. These tools enable pre-silicon verification, ensuring that a processor meets performance targets before fabrication.
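
To make this concrete, here is a minimal sketch of a parameterized multiply-accumulate block in Chisel 3. It is an invented example meant only to illustrate the Scala-embedded style; it is not code from the XiangShan repository, and the module name, widths, and elaboration entry point are assumptions.

```scala
import chisel3._

// Hypothetical example: a small parameterized multiply-accumulate block.
// It illustrates Chisel's Scala-based parameterization; it is not XiangShan code.
class Mac(val width: Int) extends Module {
  val io = IO(new Bundle {
    val a     = Input(UInt(width.W))
    val b     = Input(UInt(width.W))
    val clear = Input(Bool())
    val acc   = Output(UInt((2 * width + 8).W))
  })

  // Accumulator register: cleared on request, otherwise adds a*b every cycle.
  val accReg = RegInit(0.U((2 * width + 8).W))
  when(io.clear) {
    accReg := 0.U
  }.otherwise {
    accReg := accReg + (io.a * io.b)
  }
  io.acc := accReg
}

// Elaborate to Verilog. The entry point varies by Chisel version; this uses the
// emitVerilog helper available in Chisel 3.5+.
object MacMain extends App {
  emitVerilog(new Mac(width = 16))
}
```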

For RISC-V processor development, Chisel-based tools streamline core design, integration of vector extension, and instruction scheduling, making them particularly useful for projects like XiangShan, which require high customization and an agile development cycle.
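
Pre-silicon simulation of such a block can be driven from Scala as well. The sketch below uses the chiseltest framework (with ScalaTest) to exercise the hypothetical Mac module from the previous example; chiseltest is a common choice for Chisel designs, but it is not necessarily the exact flow the XiangShan team uses.

```scala
import chisel3._
import chiseltest._
import org.scalatest.flatspec.AnyFlatSpec

// Hypothetical unit test for the Mac sketch above, using the chiseltest framework.
// It illustrates Scala-driven pre-silicon simulation; it is not XiangShan's own flow.
class MacSpec extends AnyFlatSpec with ChiselScalatestTester {
  "Mac" should "accumulate products cycle by cycle" in {
    test(new Mac(width = 16)) { dut =>
      // Clear the accumulator first.
      dut.io.clear.poke(true.B)
      dut.clock.step()
      dut.io.clear.poke(false.B)

      // Accumulate 3 * 4, then 5 * 6, checking the running sum each time.
      dut.io.a.poke(3.U)
      dut.io.b.poke(4.U)
      dut.clock.step()
      dut.io.acc.expect(12.U)

      dut.io.a.poke(5.U)
      dut.io.b.poke(6.U)
      dut.clock.step()
      dut.io.acc.expect(42.U)
    }
  }
}
```

Tests of this kind run in seconds, which is part of what makes the agile, rapid-iteration development cycle described above practical.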

Development Process and Challenges

The development of a custom RISC-V chip using XiangShan and Mulan PSL v2 follows a structured but highly customizable approach. Developers begin by selecting and customizing the processor core. They start with Kunminghu (Gen3), choosing features such as vector extension and cache configurations. Since the RTL is fully open-source, modifications can be made at a deep architectural level. Unlike proprietary IP cores, developers have complete control over performance tuning, instruction scheduling, and power efficiency.

Once the processor core is selected, RTL design and simulation take place using MinJie, which provides a modular, Chisel-based design flow that enables rapid prototyping. The high-level hardware description language allows flexible modifications while maintaining efficiency. Developers conduct pre-silicon simulations, optimizing logic before physical design.

Verification and testing are performed using UnityChip, which integrates multiple verification methodologies to ensure robust functionality. Security analysis is conducted to prevent speculative execution attacks and cache vulnerabilities. The verification framework also enables collaborative debugging, allowing research institutions and independent developers to contribute to improving the design.

The final step in the process is fabrication. XiangShan cores are designed to be scalable across multiple process nodes, including 7nm, 12nm, and 28nm. Developers can choose local or international fabs such as TSMC, SMIC, or GlobalFoundries based on cost and geopolitical considerations. The Mulan PSL v2 license ensures that there are no commercial restrictions, making it easier to integrate into commercial silicon products.

Table. XiangShan Versus Proprietary Commercial RISC-V IP

| Aspect | XiangShan (Mulan PSL v2) | Proprietary RISC-V IP (Commercial Vendors) |
|---|---|---|
| Licensing | Fully open-source | Requires paid IP license |
| Customization | Full RTL access, high flexibility | Limited customization, pre-configured cores |
| Development Tools | MinJie (open-source agile development) | Proprietary toolchains |
| Verification | UnityChip (community-driven verification) | Vendor-provided, closed testing |
| Security Testing | Open security analysis | Limited transparency |
| Manufacturing Freedom | Fabrication at any foundry | Some IPs are restricted to certain fabs |
| Cost | Free (no licensing fees) | License fees required |

The Open-Source Advantage: Why Choose XiangShan?

XiangShan provides a unique advantage over proprietary RISC-V IP by offering full RTL access and high flexibility, unlike commercial vendors that limit customization through pre-configured cores. Development tools such as MinJie enable agile, open-source development, while proprietary solutions rely on vendor-specific toolchains. Verification is performed through UnityChip, a community-driven framework that encourages open security analysis, whereas commercial IP vendors provide proprietary closed testing. Another key advantage is that XiangShan allows fabrication at any foundry, whereas some proprietary IP solutions may have restrictions on manufacturing partners. With no licensing fees, XiangShan provides a cost-effective alternative, making it ideal for academic research, AI startups, and semiconductor companies looking to fully control their chip design.

Conclusion: The Future of Open-Source RISC-V Chips

The combination of XiangShan, Mulan PSL v2, MinJie, and UnityChip provides a complete, open-source alternative to proprietary RISC-V development. This approach is highly customizable, giving developers full control over their chip’s architecture and performance. It is also cost-effective, eliminating licensing fees and enabling broader adoption in academic and startup environments. Additionally, it is scalable and secure, integrating advanced verification tools to ensure reliability and security.

With continuous community contributions and growing industry adoption, XiangShan is positioned as a leading open-source RISC-V project, pushing the boundaries of open innovation in semiconductor design.

Jonah McLeod, RISC-V Industry Analyst jonah@jonahmcleod.com

Also Read:

2025 Outlook with Volker Politz of Semidynamics

Webinar: Unlocking Next-Generation Performance for CNNs on RISC-V CPUs

Changing RISC-V Verification Requirements, Standardization, Infrastructure