
Codasip Makes it Easier and Safer to Design Custom RISC-V Processors #61DAC

by Mike Gianfagna on 07-15-2024 at 6:00 am

DAC Roundup – Codasip Makes it Easier and Safer to Design Custom RISC-V Processors

RISC-V continued to be a significant force at #61DAC. There were many events that focused on its application in a wide variety of markets. As anyone who has used an embedded processor knows, the trick is how to be competitive. Using the same core as everyone else and differentiating in software is a strategy that tends to run out of gas quickly. There is simply not enough capability to differentiate in software alone. And so, customizing the processor core becomes the next step. The open-source ISA offered by RISC-V makes it a popular choice for customization. Achieving this goal is easier said than done, however. There are many moving parts to manage, and many pitfalls to be avoided. Codasip has substantial expertise in this area and a newly announced, safer and more robust approach to the problem was on display at DAC.  Let’s examine how Codasip makes it easier and safer to design custom RISC-V processors.

Codasip Company Mission

Codasip is a processor solutions company that uniquely helps developers differentiate their products. It was founded in 2014, and a year later offered the first commercial RISC-V core and co-founded RISC-V International. The company’s philosophy includes the belief that processor customization is something the end user wants to control. This is the most potent way to differentiate in the market.

Achieving that result requires a holistic approach. This is accomplished through the combination of the open RISC-V ISA, Codasip Studio processor design automation, and high-quality processor IP. Codasip’s custom compute enables its customers to take control of their destiny.

What’s New – A Conversation from the Show Floor

I had the opportunity to meet with two senior executives at the Codasip DAC booth – Brett Cline, Chief Commercial Officer and Zdeněk Přikryl, Chief Technology Officer. I’ve known Brett for a long time, dating back to his days at Forte Design Systems. These two gentlemen cover the complete spectrum of all things at Codasip, so we had a far-reaching and enjoyable discussion. Along the way, we may have uncovered a way to solve most of the world’s problems, but I’ll save that for another post. Let’s focus on how Codasip makes it easier and safer to design custom RISC-V processors.

We first discussed a new version of Codasip Studio called Studio Fusion, which has a capability called Custom Bounded Instructions, or CBI. Using CBI, customers can develop any type of customization needed for their intended market, but by staying within the guidelines of CBI they can be assured the changes will not cause processor exceptions. Essentially, you can’t “break” the processor if you follow CBI.

Anyone who has developed custom instructions knows this is not the case in general and great care must be taken not to introduce subtle, hard-to-find bugs. There is substantial re-verification required. All that goes away with CBI.

We also discussed how limiting this new approach could be. It turns out the answer is “not much”. Significant customization can be accomplished with much lower development time and risk. To drive home that point, Codasip was running a live demo in its booth using a customized processor that was implemented with Codasip Studio Fusion and CBI.

The application applied AI algorithms to analyze the sound of a running cooling fan and identify anomalies in the sound that indicate potential problems. The algorithm would then predict the time to failure for the fan. If the application is cooling critical electronics or automotive systems, the benefits are clear. Once the code was implemented and verified, a custom processor was created with 40 unique custom instructions to enhance the performance of the algorithm.

Speed and energy efficiency showed dramatic improvements, with power reduction in the neighborhood of 80 percent. That makes the application much easier to implement in a small, low-power form factor. I should also mention that doing a live demo of custom hardware at a trade show requires a lot of confidence – my experience is that failures find a way to show up precisely when folks are watching. This made the demo more impressive in my eyes.

It was also pointed out that Codasip generates all the infrastructure needed to use the new custom processor, including the compiler and debugger. You get everything required, with no third-party tools or support needed. This means no code changes are needed to use the custom processor; the compiler takes care of exploiting the new features. Here, we discussed another new feature that has been added: the compiler is now more micro-architecturally aware. This means the compiler has deeper knowledge of what’s going on in the custom processor, so it can perform more sophisticated and higher-impact optimization.

After my discussion with Brett and Zdeněk it became clear how much automation Codasip is delivering to the RISC-V customization process. You truly are limited only by your imagination.

To Learn More

You can learn more about Codasip Studio Fusion here. You can also learn more about Codasip on SemiWiki here. Check out the video of the live demo from the Codasip booth and see the improvements a custom processor can deliver here. And that’s how Codasip makes it easier and safer to design custom RISC-V processors at #61DAC.


Podcast EP235: Tinier than TinyML: pushing the flexible boundaries of AI – Pragmatic Semiconductor

by Daniel Nenni on 07-12-2024 at 10:00 am

Dan is joined by Dr. Richard Price, CTO and Dr. Konstantinos Iordanou, a senior ASIC designer at Pragmatic Semiconductor.

Richard has over 25 years’ experience in the development and commercialisation of a wide range of new technologies based on novel processes, materials and flexible electronics. Richard is also a non-executive director at the Henry Royce Institute – the UK’s National Institute for advanced materials research. Konstantinos Iordanou is working on pioneering projects that push the limits of flexible IC technology. He holds a Ph.D. in Computer Science from The University of Manchester and specialises in computer microarchitecture, digital design, hardware accelerators and heterogeneous systems.

Dan explores the unique and disruptive technology of Pragmatic Semiconductor, a UK-based leader in flexible integrated circuit technology and semiconductor manufacturing. The company uses thin-film semiconductors to create ultra-thin, flexible integrated circuits, known as FlexICs, that are significantly lower cost and faster to produce than silicon chips – talking days, rather than months to produce.

Richard and Konstantinos discuss their groundbreaking work on tiny classifiers, in which they created the world’s tiniest ML inference hardware on a flexible substrate. Uniquely, an evolutionary algorithm is used to automatically generate the classification hardware. The resulting chip is extremely small in area – fewer than 300 logic gates.

When implemented on a flexible substrate, such as a FlexIC, this classifier occupies up to 75 times less area, consumes up to 75 times less power and has six times better yield than the most hardware-efficient ML baseline.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



SEMICON West- Jubilant huge crowds- HBM & AI everywhere – CHIPS Act & IMEC

by Robert Maire on 07-12-2024 at 6:00 am

Semicon West SF

– We just finished the most happy SEMICON West in a long time
– IMEC stole the show- HBM has more impact than size dictates
– Has Samsung lost its memory mojo? Is SK the new leader?
– AI brings new tech issues with it – TSMC is still industry King

Report from SEMICON West

The crowds at Semicon West were both big and jubilant… more so than we have seen in a long time (and we have been attending a long time, over three decades). It was a complete turnaround from the 2-3 year downturn we have been in. Despite the fact that the majority of the memory industry has still not fully recovered and foundry logic is healthy mostly at the bleeding edge, the euphoria generated by AI and HBM has swamped the entire industry. The icing on the AI/HBM cake is the buzz generated by the CHIPS Act and the fact that the average Joe on the street has probably heard about chips (or semiconductors) by now – if not from all the news about CHIPS Act money, then from the minute-by-minute stock market buzz about NVDA.

Most of our non tech friends have no clue what NVDA does and a small minority know they make chips that have something to do with AI.

But any publicity is good publicity.

The best part of Semicon was not at Semicon

Monday afternoon, prior to the Tuesday start of Semicon, IMEC, the European semiconductor R&D consortium, hosts a series of presentations by a number of speakers on various technology issues and advancements in the industry.

This year’s discussion was especially good, covering CMOS 2.0 and the many simultaneous changes the industry is currently undergoing.

AI & HBM implementations drive many different issues and technologies in the industry, more so than prior, singular technology transitions.

There has been a lot of discussion but still not enough about the power requirements of AI devices. The latest Nvidia device runs 700 watts which means at 0.7 volts it uses 1000 amps of electricity, easily enough to weld thick steel or start the largest diesel engine let alone what the power demands will do to electric vehicles that require AI chips for autonomous driving. How do we package, test & supply power to these electrical beasts? What does it do to data centers and the grid?
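A quick back-of-the-envelope check of that current figure (only the 700 W and 0.7 V numbers come from the article; the rack count below is a made-up illustration):

```python
# Back-of-the-envelope check of the power/current arithmetic above.
# The ~700 W per device and ~0.7 V core supply are the article's figures.
power_w = 700.0
core_voltage_v = 0.7

current_a = power_w / core_voltage_v  # P = V * I  ->  I = P / V
print(f"Supply current: {current_a:.0f} A")

# A hypothetical rack of 8 such accelerators, compute power alone:
devices = 8
print(f"Rack draw: {devices * power_w / 1000:.1f} kW")
```

At these currents, even small resistive losses in the power delivery network matter, which is part of why packaging and power delivery come up next.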

AI and HBM are also about networking and moving data as fast as possible (as in Large Language Models). He who can move the most data, in, out and about, wins the race. This requires new connectivity, fast connectivity, parallel connectivity, etc.

Packaging will play an increasing role in both the power and connectivity that drive AI and HBM. In our view, packaging (the back end of the semiconductor industry) has grown to almost equal importance to the front end (wafer fabrication).

My last point about IMEC is that it painfully points out that the US is woefully behind in R&D consortia that are driven by cooperation for the greater good of the industry. In the US we no longer have a single, nationwide, R&D organization. We have a number of companies out for their own benefit in competition with one another. While speakers at Semicon talked about “together” we need to do it for real if we are serious about re-shoring and re-capturing prior US greatness in semiconductors.

Has Samsung lost its mojo to SK Hynix?

Samsung has long been the undisputed leader in semiconductor memory, with all others far behind.

It is interesting to note that SK Hynix has clearly taken the lead in the small but obviously super-critical HBM segment. This should be both an embarrassment and a wake-up call for Samsung. It is further suggested that Micron may be number two in HBM after SK Hynix, which should make alarm bells go off inside Samsung.

While some may dismiss this as a non-issue because HBM is only about 5% of the industry (and growing quickly), it is by far the most profitable segment, with the highest margins, while pricing has still not fully recovered in the greater part of the memory industry.

We would expect both heads to roll inside Samsung as well as spending to ramp to fix this embarrassment.

Meanwhile, Samsung is not lighting the world on fire with its lackluster foundry offerings and hollow bravado. TSMC remains the far-and-away undisputed King of foundry, as evidenced by its recent financial numbers, obviously driven in large part by its enablement of Nvidia, for which it deserves to be richly rewarded.

If anything, this makes us feel like Samsung has also fallen further behind TSMC in foundry… so Samsung is 0 for 2 in semiconductors.

CHIPS Act excitement

We heard a very rousing keynote address from the under secretary in charge of the CHIPS Act, which sounded more like an advertisement than a list of actual accomplishments. The CHIPS Act is clearly a motivator and influencer, but as we have previously mentioned, writing checks is the easiest part; training people, getting fabs to work, and developing technology is a lot harder.

We do think that the CHIPS Act can be the catalyst or spark that re-ignites the US semiconductor industry.

China is still the 800 pound gorilla of WFE spend in the industry

China is still outspending virtually everyone else in the industry, at many times the US spend. If the big equipment companies lost the 40%-plus of their revenues that is China, they would be sucking major wind. So they had better keep spending the tens of millions of dollars on K Street lobbyists to keep the shipments up to enable China, and move operations and jobs to Asia.

We do think that China as an overall percentage of spend will start to decrease but primarily because other countries will start to increase their spend as we slowly make our way out of the downturn.

No major product announcements at Semicon

Tokyo Electron did finally publicly release their Ion Beam sidewall etch Epion product, which has been in the market already along with AMAT’s competing Sculpta product. Both products have taken some POR (process of record) positions at customers. We understand that the TEL product may have some cost advantages.

Semicon West Shrinkage

Actual semiconductor tools left the show floor long ago. Major equipment makers have also exited stage left and have even reduced their off-floor presence in nearby hotels. This has reduced the actual show to a lot of small booths of bit players selling bits and pieces like O-rings and rubber gloves. Foreign representation seems to be on the rise, with Korea, Germany, and now Malaysia pavilions.

SEMI announced a new SEMICON “Heartland” to take place in Indiana, with a guest appearance by the governor of Indiana. A second Semicon West is scheduled for October in Arizona, likely to pay homage to that state’s rising importance in the semiconductor industry as home to the newest fabs in the US by Intel & TSMC.

The Stocks

… have been on fire for a long streak now, reaching PE ratios not seen in forever. Whether earnings can ever catch up to rising valuations remains to be seen. Everybody is asking: is it time to take profit on NVDA? We would expect occasional downdrafts of profit taking, but the overall mood and momentum is so positive it’s hard to imagine the positive tone changing much as business continues to improve.

There are a lot of supporting and secondary plays on both AI & HBM that are yet to be discovered by the general public, and probably a bunch of small caps that will see the trickle-down effects of a tide that is rising exceptionally high. This will likely further support the ongoing tidal wave of valuation…

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

KLAC- Past bottom of cycle- up from here- early positive signs-packaging upside

LRCX- Mediocre, flattish, long, U shaped bottom- No recovery in sight yet-2025?

ASML- Soft revenues & Orders – But…China 49% – Memory Improving


Who Are the Next Anchor Tenants at DAC? #61DAC

by Mike Gianfagna on 07-11-2024 at 10:00 am

DAC Roundup – Who Are the Next Anchor Tenants at DAC?

#61DAC is evolving. The big get bigger and ultimately focus on other venues for customer outreach and branding. This is a normal evolution in any industry. For EDA, it was noticed by many that Cadence and Synopsys have downsized their booths at DAC. Everyone knows CDNLive and SNUG are very successful events for these companies and so this change shouldn’t come as a surprise. There may be other examples of this trend as the industry matures. The interesting part to focus on is who will be the next wave of anchor tenants at DAC? There are clearly some new entrants to DAC that are gaining momentum fast. The how and why of this phenomenon is interesting. I had the opportunity to speak with an executive from one such company at DAC. The conversation was both enlightening and inspirational. Let’s examine who are the next anchor tenants at DAC.

Altair Company Profile

Before getting into the profile of Altair, an observation about the focus of #61DAC is relevant. The conference tagline is now The Chips to Systems Conference. This is not a marketing slogan; it’s a statement about where the electronics industry is going. That is, electronics is becoming the critical enabler for a growing class of systems.

So, the question to ask as we look for the next anchor tenants at DAC is this – which companies have a broad enough footprint to enable systems with electronics? Altair is one such company and is one to watch as the next crop of DAC anchor tenants move in. You can learn about the breadth and focus of Altair on SemiWiki here. A short excerpt from the Altair website will explain a lot as well:

Changing Tomorrow, Together

When data science meets rocket science, incredible things happen. The innovation our world-changing technology enables may feel like magic to users, but it’s the time-tested result of the rigorous application of science, math, and Altair.

Our comprehensive, open-architecture simulation, artificial intelligence (AI), high-performance computing (HPC), and data analytics solutions empower organizations to build better, more efficient, more sustainable products and processes that will usher in the breakthroughs of tomorrow’s world. Welcome to the cutting edge of computational intelligence – no magic necessary.

In my opinion, this is the stuff a DAC anchor tenant should be made of.

Altair – the Backstory

Sarmad Khemmoro

I had the good fortune to spend some time with Sarmad Khemmoro at DAC. Many thanks to Dan Nenni for setting it up. Sarmad is currently the senior VP of Product & Strategy – Electronics Design & Simulation at Altair. He clearly sees the opportunity in the global electronics market in general and at DAC in particular. He has a storied career with senior technical and strategy leadership roles at companies such as Mentor, Innoveda and Viewlogic. He knows the technology behind chip design and the ecosystem that uses it.

He’s a natural fit to help Altair take a growing role in the world of electronic systems. He shared some valuable information during our meeting.

He began with a discussion of convergence. ECAD, MCAD, PLM, and many other disciplines are now coming together, either through acquisition or partnership, to create the required technology stack to realize tomorrow’s world-changing products. For Altair, the three focus areas are silicon debug; 3DIC multi-physics, from chip to PCB to system; and job scheduling and license management. A broad footprint that is growing.

Altair has grown and will continue to grow through acquisitions. The company seems to have cracked the code for that process. It turns out many of the CEOs of acquired companies are still with Altair. That speaks volumes about the quality of the workplace and the commitment it has to its employees. We also talked about Altair’s customers – the list includes many household names.  Altair is also very strong in the automotive industry. This will be a strategic advantage as that market continues to consume more semiconductors. Sarmad is located in Detroit, so he’s up close and personal on this front.

We also discussed AI and digital twins. Altair has capabilities in both areas and Sarmad is quite familiar with the company’s strategy. The list of industries supported by Altair is quite extensive as shown in the figure below. This is a company with substantial reach.

Primary Industries Supported

Sarmad also discussed Altair’s unique and patented licensing model. The company basically re-wrote the rule book regarding tool licensing. The process is driven by something called Altair Units. Purchasing units gives users full access to all Altair software tools whenever they need them, and they can determine when, where, and how they want to use different tools without needing to worry if they’re eligible for access.

This approach removes a lot of the uncertainty and overhead associated with specific tool licensing. Altair has a long list of partners and through the Altair One™ Marketplace partner software can also be accessed with Altair Units, simplifying even more of the process.

Post-DAC Update – the Momentum Continues

Altair is clearly on the move. Its acquisition machine is in high gear with a recent announcement signaling its intention to acquire Metrics Design Automation, further expanding the company’s footprint in EDA. Metrics is a Canadian company that has developed a game-changing simulation as a service (SaaS) business model for semiconductor electronic functional simulation and design verification.

Combining the Metrics simulator with Altair’s silicon debug tools will result in a world-class, advanced simulation environment with superior simulation and debug capabilities. Note that Metrics is led by Joe Costello, who is something of a folk hero in EDA.

Tight relationships in the semiconductor ecosystem are a key attribute of any DAC anchor tenant. There was also a recent announcement that Altair has joined the Samsung Advanced Foundry Ecosystem, known as SAFE™. Altair and Samsung Electronics will combine Altair’s comprehensive EDA technology with Samsung Foundry’s manufacturing capabilities to establish a more innovative, more efficient semiconductor design and production process.

To Learn More

My conversation with Sarmad left an impact. The breadth of Altair’s tools is substantial, and the company has a vision to grow in key markets to further dominate the landscape. You can explore Altair’s capabilities for semiconductor design and EDA here. If you want to take the grand tour of all industries supported, you can do that here.

You can also see the full announcement about the Metrics acquisition here and the full Samsung SAFE announcement here.

So, the next time you wonder who are the next anchor tenants at DAC, think Altair.


AI Booming is Fueling Interface IP 17% YoY Growth

by Eric Esteve on 07-11-2024 at 6:00 am


The AI explosion has clearly been driving the semiconductor industry since 2020. AI processing, based on GPUs, needs to be as powerful as possible, but a system will reach its optimum only if it can rely on top interconnects. The various sub-system parts (memory, processor, co-processor, network) need to be connected with interface links offering ever more bandwidth and lower latency: DDR5 or HBM memory controllers, PCIe and CXL, 224G SerDes, and so on. When you design a supercomputer, raw processing power is important, but optimizing memory access, latency, and network speed is what allows you to succeed. It’s the same with AI, and that’s why interconnect protocols are becoming key.

In 2023, the semiconductor market declined, but the interface IP segment grew by 17%. Our forecast shows stronger growth for 2024 to 2028, comparable to the 20% growth rates seen earlier in the 2020s. AI is driving the semiconductor industry, and interconnect protocol efficiency is fueling AI performance. A virtuous cycle!

The interface IP category has moved from an 18% share of all IP categories in 2017 to 28% in 2023. We think this trend will amplify during the decade, with interface IP growing to 38% of the total by 2028 (at the expense of processor IP, which falls from 47% in 2023 to 41% in 2028).

As usual, IPnest has made a five-year forecast (2024-2028) by protocol and computed the CAGR by protocol (picture below). As you can see in the picture, most of the growth is expected to come from three categories: PCIe, memory controller (DDR), and Ethernet & D2D, exhibiting five-year CAGRs of 19%, 23%, and 22%, respectively.

It should not be surprising, as all these protocols are linked with data-centric applications! If we consider that the weight of the top 5 protocols was $1,820 million in 2023, the value forecasted for 2028 is $4,390 million, a CAGR of 19%.
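That CAGR can be checked with the standard compound-growth formula; a minimal sketch using the $1,820M (2023) and $4,390M (2028 forecast) figures quoted above:

```python
# CAGR check for the Top-5 interface IP protocols ($M figures from the article).
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

top5_2023 = 1820.0  # $M, 2023 actual
top5_2028 = 4390.0  # $M, 2028 forecast

print(f"Top-5 CAGR 2023-2028: {cagr(top5_2023, top5_2028, 5):.1%}")  # ~19%
```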

This forecast is based on the amazing growth of data-centric applications, AI in short. Looking at TSMC revenues split by platform in 2023, HPC is clearly the driver. This started in 2020, and we expect the trend to continue up to 2028, at least.

Conclusion

Synopsys has built a strong position on every protocol and every application, enjoying more than 55% market share, by making strategic acquisitions since the early 2000s and by offering integrated solutions (PHY and controller). We still don’t see any competitor in a position to challenge the leader. The next two are Cadence and Alphawave, with market shares around 12%, far behind the leader.

In 2024, we think a major strategy change will happen during the decade. IP vendors focused on high-end IP architecture will try to develop a multi-product strategy and market ASICs, ASSPs, and chiplets derived from leading IP (PCIe, CXL, memory controller, SerDes…). Some have already started, like Credo, Rambus, or Alphawave. Credo and Rambus already see significant revenue from ASSPs, but we will have to wait until 2025, at best, to see measurable results on chiplets.

This is the 16th version of the survey, started in 2009 when the interface IP category was a $250 million market ($1,980 million in 2023), and we can affirm that the five-year forecasts have stayed within a +/- 5% error margin!

IPnest predicts in 2024 that the interface IP category in 2028 will be in the $4,750 million range (+/- $250 million), and this forecast is realistic.

If you’re interested in this “Interface IP Survey” released in July 2024, just contact me:

eric.esteve@ip-nest.com .

Eric Esteve from IPnest

Also Read:

Semi Market Decreased by 8% in 2023… When Design IP Sales Grew by 6%!

Interface IP in 2022: 22% YoY growth still data-centric driven

Design IP Sales Grew 20.2% in 2022 after 19.4% in 2021 and 16.7% in 2020!


Will Semiconductor earnings live up to the Investor hype?

by Claus Aasholm on 07-10-2024 at 10:00 am

NVIDIA Fastest Growing Semiconductor company

This post gives the state of the semiconductor industry before the earnings season, sharing the information available before the results are revealed.

The first Q2 swallows
A few companies with quarters not aligned to calendar quarters have reported. Nvidia was slightly ahead of expectations, and the stock price briefly made the company the world’s most valuable during June. All of it is driven by the data centre and H100 AI sales.

Broadcom reported disappointing semiconductor revenue, only saved by AI Network and Accelerator sales to Meta and Google. Marvell painted a similar picture with everything down except the data centre business. This is not a good sign for the broader earnings season coming up. (Broadcom result)

Lastly, Micron showed 17% growth, mainly due to memory price increases; only the storage business was growing in bits sold. Even the compute business was flat in bits sold, indicating that Micron is not getting much action from Nvidia. (Micron result)

The closure of Q1
The total revenue of Semiconductor companies was flat in Q1-24 compared to the prior quarter, but the overall growth compared to Q1-23 was quite strong. 29% growth signals the industry is well into the cyclical recovery period (Long-term growth is currently at 8%).

If Nvidia’s strong growth is excluded, the growth falls to under 10% or close to the long-term level.
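The arithmetic behind excluding one company from a growth figure is simple; here is a minimal sketch with entirely hypothetical revenue figures (chosen only so the 29% and sub-10% rates echo the text):

```python
# Growth with and without a single outlier contributor.
# All revenue figures are hypothetical; only the resulting ~29% and
# sub-10% growth rates are chosen to echo the article.
total_q1_23 = 120.0    # $B, industry total a year ago (hypothetical)
total_q1_24 = 154.8    # $B, industry total now (hypothetical, +29%)
outlier_q1_23 = 10.0   # $B, Nvidia-like outlier a year ago (hypothetical)
outlier_q1_24 = 34.9   # $B, the outlier now (hypothetical)

growth_all = total_q1_24 / total_q1_23 - 1
growth_ex = (total_q1_24 - outlier_q1_24) / (total_q1_23 - outlier_q1_23) - 1
print(f"all companies: {growth_all:.0%}, excluding the outlier: {growth_ex:.0%}")
```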

The exclusion of Nvidia revenue makes Foundry revenue growth very similar to the growth of Semiconductor companies, highlighting that Nvidia revenue is mostly profit, and only 15% makes a mark on Foundry revenue.

The four growth curves represent the semiconductor time machine; while imperfect, they allow a peek into the future of the Semiconductor Companies.

With zero inventory movements, the time machine works like this:

The revenue of tools, materials, and foundries is a chain of events predicting the revenue of semiconductor companies. While it can be used to predict individual results of some of the largest semiconductor companies, it works better as an overall indicator of the industry.

The negative growth of materials and the drop in foundry revenue do not suggest a strong recovery in the Q2 results, and tools revenue is not a solid longer-term indication of revenue expansion.

Mean revenue results
With Nvidia’s strong performance clouding general industry insights, it is worth looking at a Box and Whiskers plot based on mean values.

This is a way of investigating industry growth without the outsized impact of the outliers. Here, it becomes obvious that not only is Nvidia driving the overall growth, but so are the Korean memory companies led by SK Hynix, which are currently winning the HBM business at Nvidia.
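The reason a median-centric view like a box-and-whiskers plot helps is that the median is far less sensitive to a single outlier than the mean. A minimal sketch with made-up growth rates:

```python
# Median vs. mean on a hypothetical set of YoY growth rates (%):
# one Nvidia-like outlier dominates the mean but barely moves the median.
import statistics

yoy_growth = [2, -5, 0, 3, -2, 1, 262]  # made-up figures; 262 is the outlier

print(f"mean:   {statistics.mean(yoy_growth):.1f}%")    # pulled up by the outlier
print(f"median: {statistics.median(yoy_growth):.1f}%")  # close to the typical firm
```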

The mean growth for semiconductor companies compared to Q1-23 is 0.2%, indicating that the AI pocket of growth is the only action in the Semiconductor Industry in Q1.

Median growth for tool companies is positively impacted by the good performance of Chinese tool companies.

Revenue growth by Manufacturing Model
We divide Semiconductor companies into three different categories:

1) Integrated Device Manufacturers: Traditional model with fabs.

2) Fabless Semiconductor Companies: Companies exclusively using foundries.

3) Mixed Manufacturing Model: Analog and power fabs with high-end digital outsourced to foundries.

The relative growth for Fabless is strong, but the impact of Nvidia accounts for most of the development. Without Nvidia, the result is 4%. The IDMs are lifted by the increase in memory pricing rather than bit growth. The mixed model companies have seen significant declines over the last two quarters.

The Inventory Situation
The inventory position for different areas of the supply chain can reveal how much of a surprise the current revenue level represents. If revenue is unfolding in line with the quarterly manufacturing plan, you would expect to see a decrease in inventory as companies try to optimise their inventory. The exception is if companies are running on low inventory, which is not the case in the current market environment (with notable exceptions for Nvidia and the company’s supply chain).

The chart shows the inventory days according to the supply chain position. As foundries and semiconductor companies have been depleting inventory compared to Q1-23, the materials companies were still struggling with the last pile-up collision.

The Q1-24 increase in inventory is driven by lower-than-expected demand from the end markets, which slams through the supply chain. This will likely continue into Q2-24 as neither foundries nor semiconductor companies invest in materials to support a potential Q2-24 revenue increase.
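For reference, the "inventory days" metric charted in analyses like this is typically computed from quarter-end inventory and the quarter's cost of goods sold; a minimal sketch with hypothetical figures (not the article's chart data):

```python
# Days-of-inventory sketch (hypothetical figures, not the article's chart data).
def inventory_days(inventory: float, quarterly_cogs: float,
                   days_in_quarter: int = 91) -> float:
    """Average days of inventory on hand, from quarter-end inventory
    and the quarter's cost of goods sold."""
    return inventory / quarterly_cogs * days_in_quarter

# Hypothetical: $9.0B inventory against $6.5B quarterly COGS.
print(f"{inventory_days(inventory=9.0, quarterly_cogs=6.5):.0f} days")
```

Rising days on this metric mean inventory is growing faster than the rate at which it is being consumed, which is the pile-up the text describes.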

World Semiconductor Trade Statistics (WSTS)
WSTS just released their Semiconductor trade statistics for May, which showed another monthly increase. While this should be a good signal, there are issues with how WSTS accounts for semiconductor revenue.

WSTS only gets monthly reports from its members. The reporting is screened by a third-party accountant who shields the identity of the reporting company, so WSTS does not know who reports what, only what products were sold. As many important companies are not members, WSTS has to estimate their monthly revenue. This problem is growing with the revenue of Nvidia, which is not a member of WSTS and now accounts for more than $8.6B/month, or more than 17% of total WSTS revenue. A year ago, it was $2.4B/month. In addition, neither Intel, AMD, nor Broadcom are members of WSTS.
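As a back-of-envelope sanity check, the quoted figures imply a rough WSTS total (this sketch uses only the numbers in the paragraph above):

```python
nvidia_monthly = 8.6   # $B/month, per the text
nvidia_share = 0.17    # Nvidia's share of total WSTS revenue (>17%)

# Implied total WSTS monthly revenue if Nvidia is 17% of it
implied_wsts_total = nvidia_monthly / nvidia_share   # ~$50.6B/month

# Nvidia's growth versus the $2.4B/month of a year ago
growth_vs_year_ago = nvidia_monthly / 2.4            # ~3.6x

print(f"Implied WSTS total: ~${implied_wsts_total:.1f}B/month")
print(f"Nvidia monthly revenue growth year over year: ~{growth_vs_year_ago:.1f}x")
```

A single non-member growing 3.6x in a year and reaching a sixth of the total is exactly the kind of estimation error that makes the aggregate statistics unreliable.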

This makes the WSTS numbers unreliable and no longer very useful for forecasting.

TSMC update
As TSMC reports monthly revenue, it is possible to see Q2 revenue already. It is a TSMC record, although only slightly above Q4-22 and Q4-23.

TSMC’s strong quarter suggests an uptick in market activity. It is hard to judge if this is broad-based or still AI-centric. Apart from Nvidia, TSMC will manufacture AI GPUs for Intel and AMD this quarter. This could signal that AMD and Intel are expecting meaningful AI orders. Whether this materialises is another matter entirely. Also, TSMC is winning orders from Samsung’s foundry business, which is struggling to get good yields on leading-edge nodes.

Semiconductor Operating Profits
The operating profits for semiconductor companies compared to Q1-23 look incredibly good, with over 300% growth, while the rest of the supply chain has meager results.

As the Q1-23 view is taken from the semiconductor cycle minimum, it captures memory companies moving from negative to positive operating profit, which does not tell the full story. Turning the dial back to Q1-22 gives a significantly different view, in which all of the supply chain’s operating profit growth is under water.

It is also worth noting that Nvidia now dominates the total operating profit of the semiconductor companies, skewing the graphs dramatically.

While we still wait for most Semiconductor companies to publish the Q2-24 result, the division between Nvidia and the rest of the industry is clear. In Q1, Nvidia accounted for more than half the semiconductor operating profit. This is likely to be the case in Q2 also.

The Stock market perspective
While we do not try to predict share prices, we do not mind comparing business development with increases in share prices.

We understand that revenue is not the only important element in a company’s valuation, but revenue growth is incredibly important for semiconductor companies. Without it, it is difficult to make meaningful gains in free cash flow, which matters more in valuations.

We use the Philadelphia Semiconductor Index (SOX) as a good proxy for the collective share price of semiconductor companies. As can be seen in the graph below, the current share gains are not justified by a similar gain in revenue growth.

From an operating profit perspective, the increase in share price looks more justified, though it should be noted that Nvidia is driving both.

Adding a comparison from Q1-22 gives a different view, where none of the supply chain sectors have returned to an operating profit at the level of Q1-22.

Conclusion
While there is a lot of semiconductor optimism ahead of the current earnings season, there is not much evidence of the significant revenue growth or inventory depletion that would indicate a general upturn. The optimism surrounding the WSTS numbers does not point to a general upturn either, as they are dominated by Nvidia’s hypergrowth and memory revenue that is rising on price increases; memory volume is not increasing.

TSMC will be reporting healthy numbers, but nothing that goes through the roof. The good result will be dominated by supplies of AI products for Nvidia, Intel, AMD and Broadcom. It will be interesting to see if the semiconductor companies can turn these products into revenue. We will have a special focus on Intel, as the company will need to show results soon.

If you are an investor or another stakeholder in the semiconductor industry, you can gain insights from our updates as the semiconductor companies report Q2 results.

Also Read:

Automotive Semiconductor Market Slowing

2024 Starts Slow, But Primed for Growth

Electronics Turns Positive


Production AI is Taking Off But Not Where You Think

Production AI is Taking Off But Not Where You Think
by Bernard Murphy on 07-10-2024 at 6:00 am

TinyML

AI for revolutionary business applications grabs all the headlines, but real near-term growth is already happening in consumer devices and in IoT. For good reason. These applications may be less eye-catching but are eminently practical: background noise cancellation in earbuds and hearing aids, keyword and command ID in voice control, face-ID in vision, predictive maintenance, and health and fitness sensing. None of these require superhuman intelligence or revolutions in the way we work and live, yet they deliver meaningful productivity and ease-of-use improvements. At the same time, they must be designed to milliwatt-level power budgets and must be attractive to budget-conscious consumers and enterprises aiming to scale. Product makers in this space are already actively building and selling products for a wide range of applications and now have a common interest group (not yet a standards body) in the tinyML Foundation.

Requirements and Opportunity

Activity around tinyML is clear, but it’s worth stressing that the tinyML group isn’t (yet) setting hard boundaries on what qualifies a product for the group. However, per Elia Shenberger (Sr. Director Biz Dev, Sensors and Audio at Ceva), one common factor is power: less than a watt for the complete device, and milliwatts for the ML function. Another common factor is ML performance, up to hundreds of gigaops per second.

These guidelines constrain networks to small ML models running on battery-powered devices. Transformers/GenAI are not in scope (though see the end of this blog). Common uses will be sensor data analytics for remote deployments with infrequent maintenance, and always-on functions such as voice and anomalous sound detection or visual wake triggers. As examples of active growth, Raspberry Pi (with AI/ML) is already proving very popular in industrial applications, and ST sees tinyML as the biggest driver of the MCU market within the next 10 years.

According to ABI Research, 4 billion inference chips for tinyML devices are expected to ship annually by 2028, with a CAGR of 32%. ABI also anticipates that by 2030, 75% of inference-based shipments will run on dedicated tinyML hardware rather than general-purpose MCUs.

A major factor in making this happen will almost certainly be cost, both hardware and software. Today a common implementation depends on an MCU for control and feature extraction (signal processing), followed by an NPU or accelerator to run the ML model. This approach incurs a double royalty overhead and will certainly result in a larger chip area/cost. It will also promote greater complexity in managing software, AI models, and data traffic between these cores. In contrast, single-core solutions with out-of-the-box APIs, libraries, and ported models based on open model zoos are going to look increasingly appealing.

Ceva-NeuPro-Nano

Ceva is already established in the embedded inference space with their NeuPro-M family of products. Recently they extended this family by adding NeuPro-Nano to address tinyML profiles. They claim some impressive stats versus alternative solutions: 10X higher performance, 45% die area, 80% lower on-chip memory demand and 3X lower energy consumption.

The architecture allows them to run control code, feature extraction and the AI model all within the same core. That reduces the burden on the MCU, allowing a builder to go with a smaller MCU or even dispense with that core altogether (depending on the application). To understand why, consider two common tinyML applications: wake-word/command extraction from voice, and environmental noise cancellation. In the first, feature extraction consumes 36% of processing time, with the balance in the AI model. In the second, feature extraction consumes 68% of processing time versus the AI model. Clearly, moving these into a common core with dedicated signal processing plus an ML engine is going to outperform a platform that splits feature extraction and the AI model between two cores.
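A toy latency model makes the argument concrete. Only the 36%/68% feature-extraction shares come from the text; the MCU slowdown factor and inter-core transfer overhead below are purely illustrative assumptions, not Ceva figures:

```python
def two_core_time(fe_frac: float, mcu_fe_slowdown: float = 2.0,
                  transfer_overhead: float = 0.15) -> float:
    """Relative inference time with feature extraction on an MCU and the AI
    model on a separate NPU. Assumes (illustratively) that the MCU runs
    feature extraction mcu_fe_slowdown times slower than a DSP-equipped core,
    plus a fixed inter-core data-transfer cost per inference."""
    return fe_frac * mcu_fe_slowdown + (1 - fe_frac) + transfer_overhead

SINGLE_CORE_TIME = 1.0  # baseline: signal processing + ML engine in one core

for name, fe in [("wake-word/command", 0.36), ("noise cancellation", 0.68)]:
    ratio = two_core_time(fe) / SINGLE_CORE_TIME
    print(f"{name}: two-core path ~{ratio:.2f}x slower in this toy model")
```

The model also shows why the benefit grows with the feature-extraction share: the more of the workload that sits in signal processing, the more a split architecture pays for running it on the wrong core.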

The NeuPro-Nano neural engine that runs the AI model is scalable, supporting multiple MAC configurations. ML performance is further boosted through sparsity acceleration and activation acceleration for non-linear types such as sigmoid.

Proprietary weight compression technology dispenses with the need for intermediate decompression storage, handling decompression on the fly as needed. This significantly reduces the need for on-chip SRAM – more cost reduction.

Power management is a key component in meeting tinyML objectives. Clever sparsity management minimizes calculations with zero weights, dynamic voltage and frequency scaling (tunable per application) can significantly reduce net power, and weight sparsity acceleration also reduces energy/bandwidth communication overhead.

Finally, the core is designed to work directly with standard inference frameworks – TensorFlow Lite for Microcontrollers and μTVM – and offers a tinyML Model Zoo covering voice, vision and sensing use cases, based on open libraries, pre-trained and optimized for NeuPro-Nano.

Future proofing

Remember that point about tinyML being a collaboration rather than a standards committee? The initial aims are quite clear; however, they continue to evolve, at least in discussion, as applications evolve. Maybe the power ceiling will be pushed up, maybe bit-widths should cover a wider range to support on-device training, maybe some level of GenAI should be supported.

Ceva is ready for that. NeuPro-Nano already supports 4-bit to 32-bit accuracies as well as native transformer computation. As the tinyML goalposts move, NeuPro-Nano can move with them.

Ceva-NeuPro-Nano is already available. You can learn more HERE.



Facing challenges of implementing Post-Quantum Cryptography

Facing challenges of implementing Post-Quantum Cryptography
by Don Dingee on 07-09-2024 at 10:00 am


While researchers continue a march for more powerful quantum computers, cybersecurity measures are already progressing on an aggressive timeline to avoid potential threats. The urgency is partly in anticipation of a “store-now-decrypt-later” attack where compromised data, seemingly safe under earlier generations of encryption technology, is gathered and kept until quantum computers grow powerful enough to enable future decryption. Hardware lifecycles are also on the minds of many, where chips developed using classical pre-quantum algorithms will abruptly become obsolete. Secure-IC outlines the approach needed to confront the industrial challenges of implementing Post-Quantum Cryptography (PQC) in its new white paper.

Revisiting the algorithms and planning a transition

RSA became the de facto standard in encryption technology in the late 1970s. It combines short decryption times with unreasonably long crack times thanks to long key lengths. Crack time estimates in hundreds of years were the best guess based on the computing power of the day – mainframes and mini-computers. For every measure, there is a countermeasure, and it only took two decades for Shor’s algorithm to emerge, theoretically rendering both RSA and elliptic curve cryptography vulnerable. In practice, Shor’s algorithm would need to run on a much more powerful computer to crack encryption in a reasonable time. Despite processing power advances along Moore’s law, RSA cryptography has remained safely beyond cracking.
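To make the factoring threat concrete, here is a toy RSA round-trip with deliberately tiny primes. It is illustrative only; real deployments use 2048-bit or larger moduli, which is exactly what keeps brute-force factoring (unlike Shor's algorithm on a sufficiently large quantum computer) out of reach:

```python
from math import gcd

p, q = 61, 53                  # secret primes (toy-sized)
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient (3120)
e = 17                         # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)          # encrypt: msg^e mod n
assert pow(cipher, d, n) == msg  # decrypt recovers the message

# An attacker who can factor n recovers p and q, and with them the private key.
# Brute force only works because n is tiny here; Shor's algorithm would make
# this step feasible for real key sizes, hence "store-now-decrypt-later".
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(cipher, d_cracked, n) == msg  # attacker decrypts without the key
print("factored n, recovered plaintext:", pow(cipher, d_cracked, n))
```

Everything about the attack is classical except the factoring step, which is why the security of RSA rests entirely on factoring staying hard.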

Quantum computing changes the curve with an exponential increase in computational power as the number of qubits scales. Soon, quantum computers could offer enough operations per second to cut crack times dramatically for classical encryption methods. That should not be a surprise – classical encryption algorithms remain fixed while computing power grows yearly, which means new algorithms will be needed if encryption is to stay safe.

NIST has pursued PQC algorithms since 2016, announcing its first round of selections in July 2022. From those selections, the NSA issued its PQC recommendations in the Commercial National Security Algorithm Suite 2.0 (CNSA Suite 2.0) with timelines for modernizing six classes of systems and a target of having all systems PQC-enabled by 2033.

With the NSA’s initial software/firmware signing and cloud services goals looming in 2025, developers need to get moving with PQC technology and IP, forcing the discussion from theory to practice. Agencies in Europe – including France’s National Cybersecurity Agency (ANSSI) and Germany’s Federal Office for Information Security (BSI) – and Asia have issued similar timelines for approaching the PQC transition.

Projecting PQC theory into practical implementations

Secure-IC devotes the balance of its white paper to practical implementation challenges. High on the list is performance, particularly embedded device performance, as many more devices connect to the internet and must encrypt and decrypt traffic for security. Also on the list is hybridization, where classical and PQC algorithms exist in systems simultaneously. Another point is the existence of new cryptographic primitives in PQC and the associated concerns with design, integration, licensing, and interoperability. Their last point is certifications, where industry and regional differences complicate the landscape and usually mean addressing multiple certification efforts to field a product in various applications and markets.

In developing its PQC-ready technologies, Secure-IC created a hardware accelerator and software library that delivers a complete solution to address these challenges. Their hardware architecture manages impacts on power, performance, and area (PPA) for enabling embedded devices with PQC. Their software provides configurable modules for both classical and post-quantum algorithms. Secure-IC’s solutions have achieved several certifications, including those for the automotive industry.

To download a copy of the white paper and see how Secure-IC solutions face the challenges and help developers safeguard digital assets, please visit the Secure-IC website:

Redefining Security – Confronting the Industrial Challenges of Implementing Post-Quantum Cryptography (PQC)


Breker Brings RISC-V Verification to the Next Level #61DAC

Breker Brings RISC-V Verification to the Next Level #61DAC
by Mike Gianfagna on 07-09-2024 at 6:00 am

DAC Roundup – Breker Brings RISC V Verification to the Next Level

RISC-V is clearly gaining momentum across many applications. That was quite clear at #61DAC as well. Breker Verification Systems solves challenges across the functional verification process for large, complex semiconductors. Its Trek family of products is production-proven at many leading semiconductor companies worldwide. So, it seems logical that Breker brings RISC-V verification to the next level and that’s exactly what the company did at #61DAC.

The highlights of Breker’s presence at the show include:

  • A complete range of tests for the entire RISC-V core verification stack from ISA to system-level interaction and performance.
  • Test Suite Synthesis AI Technology to track complex, unpredictable bugs and accelerate coverage of complex, super-scalar, out-of-order microarchitecture pipeline implementations
  • Self-checking content that is portable across simulation, emulation, and post silicon with debug and coverage analysis

Let’s look at how Breker brings RISC-V verification to the next level.

RISC-V Automated Core Verification with Synthesis Amplification

Common RISC V Verification Stack

The verification of a RISC-V processor core should include a “stack” of scenarios as shown in the figure. Breker’s RISC-V CoreAssurance SystemVIP uniquely provides this complete scenario range, covering the entire RISC-V core verification stack. Starting with randomized instruction generation and microarchitectural scenarios, unique tests check all integrity levels, ensuring smooth integration of the core into an SoC.

This can also be extended to allow custom RISC-V instructions to be fully incorporated into the complete test suite. The capability may be ported across simulation, emulation, prototyping, post-silicon, and virtual platform environments to complete the picture.

A capability called test suite synthesis verification amplification is also included. Most test suites are templated, allowing individual tests to be configured for various design situations. Breker’s SystemVIP goes further: using planning algorithms, an AI technique, its synthesis technology has an amplifying effect on the scenario models, significantly improving coverage and bug hunting.

Comprehensive System Coherency Verification

Breker’s popular Cache Coherency SystemVIP is used by most of the leading semiconductor companies worldwide and has found hundreds of bugs across many complex SoCs. As the complexity of SoCs increases, so does the requirement for system-level coherency that includes fabric and I/O, as well as advanced memory architectures.

Breker addresses these challenges with its next generation System Coherency SystemVIP, leveraging Test Suite Synthesis to generate a broad range of coherency tests. These tests are based on multiple verification algorithms and may be easily configured to operate on all memory and fabric architectures across multicore platforms. The synthesis platform includes AI planning algorithms, cross combination and concurrent scheduling for high-coverage, and complex corner-case evaluation.

As more complex RISC-V multi-cores and systems are produced, coherency for these designs is increasing in importance. Breker’s coherency SystemVIP works hand-in-hand with its other RISC-V SystemVIPs to enable a complete solution for the most advanced designs.

The SystemVIP can generate both C code and transactions for SoC testbenches, or UVM sequences for cache unit and sub-system simulation. It can operate on a virtual prototype, simulation, emulation, FPGA prototype and even actual silicon platforms, and includes full debug and profiling of the device under test on those platforms.

Breker’s Test Suite Synthesis has been shown to produce dramatic improvements in test composition time and coverage over and above basic test generators, including typical templating test schemes. The figure below provides an overview of the platform.

Platform Overview

The CEO Perspective

Dave Kelf

I had the opportunity to catch up with my good friend and CEO at Breker, Dave Kelf. I wanted to get his perspective on the RISC-V market and the impact these new innovations from Breker are having. Here’s what Dave had to say:

While RISC-V represents a huge discontinuity across the electronic industry, there is a quality expectation that has been set by companies such as Arm that RISC-V cores must meet to be successful. This requires in-depth, comprehensive verification, and the best way to meet at least part of this need is to reuse test suites that are already proven.

RISC-V verification has its unique challenges, and these are compounding as the cores get more advanced. Existing, templated tests are fine for basic embedded cores, but run out of steam for the types of devices that are now emerging. We need to apply synthesis techniques to tease out deep sequential, unpredictable bugs, implement performance-based testing and enable system-level integration verification, and this accounts for the demand explosion we have seen at Breker.

To Learn More

You can learn more about RISC-V automated core verification with synthesis amplification here and you can learn more about comprehensive system coherency verification here.  And that’s how Breker brings RISC-V verification to the next level at #61DAC.


Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC

Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC
by Mike Gianfagna on 07-08-2024 at 10:00 am

DAC Roundup – Intel’s Gary Patton Shows the Way to a Systems Foundry

#61DAC was buzzing this year with talk of AI and multi-die, heterogeneous design. The promise of making 2.5/3D design and a chiplet ecosystem mainstream reality was the focus of a lot of the panels and presentations at the conference. AI is certainly a driver for this new design style, but the conversation was broader than just AI, as you will see. This new design style will require effort from every part of the semiconductor ecosystem, and this focus was on display during DAC. There is a focal point where all this work needs to come together to make it commercially available. That focal point is the foundry, and there was a keynote address on Tuesday morning at DAC that did a great job explaining how to open the door to the future. Let’s explore how Intel’s Gary Patton shows the way to a systems foundry.

What a Systems Foundry Is and Why It Matters

Before I get into Gary’s keynote, I’d like to address the elephant in the room. I’ve been in the semiconductor business for a very long time. Over the years, I’ve known Intel as a technology powerhouse that dominates markets, crushes the competition and does things the Intel Way.

Open, collaborative, ecosystem-focused and service-oriented weren’t necessarily the first things I would think of when I heard “Intel”. But that’s exactly the presentation delivered by Dr. Gary Patton during his keynote address. Intel is clearly changing, and in a big way. With its systems foundry initiative, Intel is taking a leadership role in defining the future of semiconductor design and manufacturing. This role requires a new type of culture, and Gary is one of the Intel executives leading the way. I had a chance to speak 1:1 with Gary at DAC, and I’ll share some of his personal insights in a moment. But first, let’s look at some of the messages from his keynote.

Gary began with some eye-opening statistics. According to IDC, the world creates nearly 270,000 petabytes of data every day. That’s 270,000,000,000 gigabytes. Intel estimates that by 2030, 1 petaflop of compute and 1 petabyte of data will be less than 1 millisecond away from the average user. Enabling these achievements will require disruptive innovation – innovation that clearly goes beyond the Moore’s Law scaling we’ve come to rely upon for so long.

He also mentioned that while AI is contributing to this huge growth in data volume and data processing requirements, it also presents significant energy efficiency challenges. According to the NY Times and Google, AI could soon need as much electricity as an entire country (~100 terawatt-hours/year).

Gary pointed out that disruptive innovation is nothing new to our industry. Over the years, we’ve conquered the bipolar power limit, gate oxide limit, and now the planar device limit. Conquering this last one will require a combination of chip and chiplet implementation as well as package interconnect density and energy efficiency. Intel aims to be at the epicenter of all these innovations and that’s what its Systems Foundry initiative is all about.

Thanks to its advanced packaging work, Intel is on track to deliver a 50X improvement in energy efficiency and a 10,000X improvement in interconnect density, as shown in the figure below.

Intel Packaging Innovation

Gary looked beyond Intel’s innovations for the complete picture. He discussed the work of UCIe, a consortium of 135 companies. The stated goal of this effort is to develop an open specification that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level. Gary explained that the work of UCIe is delivering two orders of magnitude improvement in energy efficiency and three to four orders of magnitude improvement in bandwidth when compared with the standard package in the lower left of the figure above. These packaging improvements also deliver at least one order of magnitude lower latency than external interconnects like PCIe, Ethernet etc.  This is important work that Intel Foundry is clearly supporting.

Gary then discussed the importance of system technology co-optimization, a much broader and more ambitious version of design technology co-optimization. He explained that software & architecture, packaging, and silicon are all part of this effort which must be holistic. He stated that, “progress at individual layers in the stack is necessary but not sufficient. The entire system must be co-optimized.”

While much of the advanced process and packaging work at Intel is fueling this effort, close collaboration with the entire IP, EDA, design services, and advanced system assembly and test ecosystem is also critical for success. He described in detail the many programs that Intel Foundry has underway with its ecosystem partners to build and certify next-generation design and manufacturing capabilities.  He described regular meetings with all key EDA suppliers and showed very detailed scorecards of EDA certifications across all key Intel technologies. The breadth of this effort is truly impressive.  Coming a bit later in this post is more proof of Intel’s commitment to an open design flow.

Gary described the five-year investment Intel has made to deliver a systems foundry capability. He reported that today the company has over 100 2.5D designs in manufacturing. Design enablement, an open and collaborative attitude with a quality-first culture and strong customer support and certified methodologies are all part of this investment as shown in the figure below.

Intel Investment

The chart above really drove it home for me. This is very much a new and improved version of Intel. One that maintains its technology strengths but adds all the elements of a leading, world-class foundry to create a systems foundry. Next, let’s get to know the presenter of the keynote.

Leading Change – Gary Patton’s Perspective

Gary Patton

I was fortunate to have some private time with Gary after his keynote at DAC. Gary is one of the many “outsiders” that Intel has hired over the past few years – that five-year investment that is summarized above. I believe Gary’s entire career prepared him for his current work at Intel. After receiving his Ph.D. in EE from Stanford, he spent over 25 years in various leadership roles at IBM, in research, microelectronics and various corporate initiatives and product lines. Throughout this time, he honed his skills in product/technology development as well as ecosystem collaboration.

He then spent 4.5 years at Globalfoundries as chief technology officer and senior vice president of Worldwide R&D and Design Enablement. He has now been at Intel for 4.5 years as corporate vice president and general manager, Foundry Design Enablement. He is one of the many recent hires at Intel who bring broad industry experience to the company. 

Gary explained that he has always had a great respect for the accomplishments of Intel. He came to the company not to “fix” anything, but rather to take a great company to the next level. It seems to have worked out well. He credits the past 4.5 years as the best time in his career. When you consider all the things he’s accomplished, that’s saying a lot.

Gary talked about a corporate-wide shift at Intel to address the broader challenges and opportunities ahead.  Tone at the top is an important part of this and Pat Gelsinger is exactly the right person to convey those messages. Gary is delightful to speak with. He is articulate, personable and a very effective leader. A closing comment he made sticks with me. He explained that he brought many lessons learned to Intel from his prior experiences. A key one is that, “if you’re in the foundry business, your customers will make you better.”

Proof of Intel’s Commitment to An Open Design Flow

On the first day of DAC there was more proof of Intel’s growing ecosystem and the commitment being made to create a broad set of reference flows. The following announcements were made by Intel ecosystem partners to support access to Intel’s EMIB technology:

  • Ansys is collaborating with Intel Foundry to deliver signoff verification of thermal and power integrity and mechanical reliability of Intel’s EMIB technology spanning advanced silicon process nodes to various heterogenous packaging platforms.
  • Cadence announced the availability of a complete EMIB 2.5D packaging flow, digital and custom/analog flows for Intel 18A, and design IP for Intel 18A.
  • Siemens announced the availability of an EMIB reference flow for Intel Foundry’s customers. This is in addition to their announcement of Solido™ Simulation Suite certification for custom IC verification on Intel 16, Intel 3, and Intel 18A nodes.
  • Synopsys announced the availability of its AI-driven multi-die reference flow for Intel Foundry’s EMIB advanced packaging technology, accelerating the development of multi-die designs.

Suk Lee, vice president for Ecosystem Development at Intel Foundry, commented, “today’s news shows how Intel Foundry continues to combine the best of Intel with the best of our ecosystem to help our customers realize their AI systems ambitions.”

You can see the complete announcement from Intel Foundry here. You can learn more about Intel’s plans to deliver a systems foundry for the AI era here.  And that’s some backstory about how Intel’s Gary Patton shows the way to a systems foundry.  #61DAC