
What’s all the Noise in the AI Basement?

by Claus Aasholm on 06-05-2024 at 8:00 am


My dog yawns every time I say Semiconductor or Semiconductor Supply Chain. Most clients say, “Yawn…. Don’t pontificate – pick the Nasdaq winners for us!”

Will Nvidia be overtaken by the new AI players?

If you follow along with me, you might gain some insights into what is happening in AI hardware. I will leave others to do the technology reviews and focus on what the supply chain is whispering to us.

The AI processing market is young and dynamic, with several key players, each with unique strengths and growth trajectories. Usually, the supply chain has time to adapt to new trends, but the AI revolution has been fast.

Nvidia’s data centre business is growing insanely, with very high margins. It will be transforming its business from H100 to Blackwell during this quarter, which could push margins even higher. AMD’s business is also growing, although at a lower rate. Intel is all talk, even though expectations for Gaudi 3 are high.

All the large cloud providers are fighting to get Nvidia’s AI systems, but they find them too expensive, so they are developing their own chips. Outside the large companies, new players are popping up like weeds.

One moment, Nvidia is invincible; the next moment, it will lose because of its software… or its hardware… or…

The only absolute is that everything revolves around Nvidia, and everybody has an opinion about how long Nvidia’s rule will last. A trip into the AI basement might reveal some insights that can help predict the future of AI hardware.

The Semiconductor Time Machine

The Semiconductor industry boasts an intricate global supply chain with several circular dependencies that are as fascinating (yawn) as they are complex. Consider this: semiconductor tools require highly advanced chips to manufacture even more advanced chips, and Semiconductor fabs, which are now building chips for AI systems, need AI systems to function. It’s a web of interdependencies that keeps the industry buzzing.

You have heard: “It all starts with a grain of sand.” That is not the case – it starts with an incredibly advanced machine from the university cities of Northern Europe. That is not the case either. It begins with a highly accurate mirror from Germany. You get the point now. Materials and equipment propagate and circulate until chips exit the fabs, get mounted into AI server systems that come alive and observe the supply chain (I am confident you will check this article with your favourite LLM).

The timeline from when a tool is made until it starts producing chips is long: in the best case a few quarters, often a few years. This extended timeline allows observation. It is possible to watch materials and subsystems propagate through the chain and make predictions about what will happen.

Although these observations do not always provide accurate answers, they are an excellent tool for verifying assumptions and adding insight to the decision-making process.

The challenge is that the supply chain and the observational model are ever-changing. Although this allows me to continue feeding my dog, a new model must be applied every quarter.

The Swan and the ugly ducklings

I might lose a couple of customers here, but there is an insane difference between Nvidia and its nearest contenders. The ugly ducklings all have a chance of becoming swans, but not in the next couple of years.

The latest scorecard of processing revenue can be seen below. This is a market view that excludes internally manufactured chips not traded externally:

This view is annoying for AMD and Broadcom but lethal for Intel. Intel can no longer finance its strategy through retained earnings and must engage with the investor community to obtain new financing. Intel is no longer the master of its destiny.

The Bad Bunch

These are some of Nvidia’s critical customers and other data centre owners who are hesitant to accept the new ruler of the AI universe and have started to make in-house architectures.

Nvidia’s four largest customers each have architectures in progress or production in different stages:

  1. Google TPU (Tensor)
  2. Amazon Inferentia and Trainium
  3. Microsoft Maia
  4. Meta MTIA

A short overview of the timing of these architectures can be seen here.

Unlike the established chip manufacturers, only Google has real manufacturing traction, through its TPU architecture, and rumours suggest it is more than ordinary traction.

Let’s go and buy some semiconductor capacity.

As the semiconductor technology needed for GPU silicon is incredibly advanced, all the new players will have to buy semiconductor capacity. An excellent place to start is TSMC of Taiwan. Later, TSMC will be joined by Samsung and Intel, but for now, TSMC is the only show in town. Intel talks a lot about AI and becoming a foundry, but the sad truth is that it currently has 30% of its chips made outside, and it will take some time before that changes. Even when Intel gets the manufacturing capacity, it still needs customers to switch, which is not an easy or cheap task. With new ASML equipment in Ireland and Israel, these are likely the first Intel locations to go online.

The problem for the new players is that access to advanced semiconductor capacity is a strategic game based on long-term alliances. It is not like buying potato chips.

TSMC’s most important alliances

The best way to understand TSMC’s customer relationships is through revenue by technology.

TSMC’s most crucial alliance has been with Apple. As Apple moved from dependence on Intel to reliance on its home-cooked chips, the coalition grew to the point that Apple is the only customer with access to 3nm, TSMC’s most advanced process. This will change as TSMC introduces a 2nm technology, which Apple will try to monopolise again. You can rightfully taunt the consumer giant for not being sufficiently innovative or having lost the AI transition, but the most advanced chips ever made can only be found in Apple products, and that is not about to change soon.

As a side note, it is interesting to see that TSMC’s $8.7B-per-quarter High-Performance Computing division is fuelling customer businesses worth far more: a datacenter business with combined revenue approaching $25B, all of the Mac production at $7.5B, plus some other stuff. TSMC is not capturing as much of the value as its customers are.

Nvidia and TSMC

The relationship between Nvidia and TSMC is also very strong, and if Nvidia is not already TSMC’s most important customer, it will be very soon. The prospects of Nvidia’s business are better than those of Apple’s.

Both the Apple and Nvidia relationships with TSMC are at the C-level as they are of strategic importance for all the companies. It is not a coincidence that you see selfies of Jensen Huang and Morris Chang eating street food together in Taiwan.

Just as Apple has owned the 3nm process, Nvidia has owned the 4nm process. Although Samsung is trying to attract Nvidia, it is not likely to succeed, as there are other attractions to the TSMC relationship, as we will see later.

TSMC and the rest

With a long history and a good outlook, the AMD relationship is also strong, while the dealings with Intel are slightly more interesting. TSMC has a clear strategy of not competing with its customers, which Intel certainly will when Intel Foundry Services becomes more than a wet dream. Intel gets 30% of its chips made externally, and although the company does not disclose where, it is not hard to guess. TSMC manufactures for Intel until Intel is sufficiently strong to compete with TSMC. Although TSMC is not worried about competition from Intel, I am sure it will keep some distance, and Intel is not first on TSMC’s dance card.

The Bad Bunch is also on TSMC’s customer list, but not with the same traction as the Semiconductor companies. However, they will not have a strong position if Foundry capacity becomes a concern.

As Apple moves to 2nm, it will release capacity at 3nm. However, this capacity is still modest, with revenue around $2B per quarter, and needs to be expanded dramatically to cover all of the new architectures that plan to move into the 3nm hotel. Four companies are committed to 3nm, and the rest will likely follow shortly.

TSMC expects the 2024 3nm capacity to be 3x the 2023 capacity. Right now, there is sufficient capacity at TSMC, but that can change fast. Even though Intel and Samsung lurk in the background, they do not have much traction yet. Samsung has secured Qualcomm for its 2nm process, and Intel has won Microsoft. It is unclear if this includes the Maia AI processors.

TSMC’s investments

TSMC is constantly expanding its capacity, so much so that it can be hard to determine whether it is sufficient to fuel the AI revolution.

These are TSMC’s current activities. Apart from showing that Taiwan’s Chip supremacy will last a few years, they also show that the new 2nm technology needed to relieve 3nm technology is over a year away.

There are other ways of expanding capacity. It can be extended by adding more or faster production lines to existing fabs. A dive into another part of the supply chain can help understand if TSMC is adding capacity.

The combined tool sales are down, mostly in TSMC’s home base, Taiwan, and the other expansion area for 2nm, USA. This matches TSMC’s CapEx to Revenue spend (how much of revenue is spent on Capital Investments—new tools & factories).
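The CapEx-to-revenue ratio mentioned above is a simple quotient. A minimal Python sketch, using hypothetical quarterly figures rather than TSMC’s actual numbers, shows how a declining ratio signals slowing capacity investment:

```python
def capex_intensity(capex, revenue):
    """Share of revenue spent on capital investments (tools and fabs)."""
    return capex / revenue

# Hypothetical quarters ($B CapEx, $B revenue): falling CapEx against
# roughly flat revenue lowers the ratio, the same signal as the
# declining tool sales discussed above.
quarters = [(9.0, 20.0), (8.0, 20.5), (6.5, 21.0)]
trend = [round(capex_intensity(c, r), 3) for c, r in quarters]
```

A falling trend in this ratio is exactly the pattern that suggests short-term capacity is not the priority.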

Although TSMC is adding a lot of capacity, it might be too late to allow all the new players to get the capacity they need for their expansion. The low tool sales in Taiwan suggest that short-term capacity is not on the TSMC agenda; rather, the company is focusing on the Chips Act-driven expansion in the USA, which will delay capacity.

Samsung is not attracting attention to its foundry business, and Intel is some time away from making a difference. Even though the long-term outlook is good, there are good reasons to fear that there are not enough investments in short-term expansion of leading edge Semiconductor capacity at the moment.

A shortage can seriously impact the new players in AI hardware.

The current capacity limitation

It is not silicon that is limiting Nvidia’s revenue at the moment. It is the capacity of the High-Bandwidth Memory and the advanced packaging needed in the new AI servers.

Electrons and distance are not friends

The simplest way of describing this is that electrons and distance are not friends. If you want high speed, you need to get the processors close to each other and close to a lot of high-bandwidth memory. To do so, semiconductor companies are introducing new ways of packaging the GPUs.

The traditional way is to place dies on a substrate and wire them together (2D), but this is not sufficiently close for AI applications. They are currently using 2.5D technology, where stacks of memory are mounted beside the GPU and communicate through an interposer.

Nvidia is planning to go full 3D with its next-generation processor, which will have memory on top of the GPU.

Well, as my boss used to say, ” That sounds simple—now go do it!” The packaging companies have as many excuses as I have.

Besides having to flip and glue tiny dies together and pray for it to work, DRAM must be extremely close to the oven – the GPU.

“DRAM hates heat. It starts to forget stuff at about 85°C, and is fully absent-minded at about 125°C.”

Marc Greenberg, Group Director, Cadence

This is why you also hear about the liquid cooling of Nvidia’s new Blackwell.

The bottom line is that this technology is extremely limited presently. Only TSMC is capable of implementing it (CoWoS—Chip-on-Wafer-on-Substrate in TSMC terminology).

This is no surprise to Nvidia, which has taken the opportunity to book 50% of TSMC’s CoWoS capacity for the next 3 (three?) years in advance.

Current AI supply chain insights

Investigating the supply chain has allowed us to peek into the future, as far as 2029, when the last of the planned TSMC fabs goes into production. The focus has been on the near term until the end of 2025, and this is what I base my conclusion on (should anybody be left in the audience). Feel free to draw a different conclusion based on the facts presented in this article:

  • Nvidia is the only show in town and will continue to be so for the foreseeable future.
  • Nvidia is protected by its powerful supplier relations, built over many years.
  • AMD will do well but lacks scale. Intel… well… it will take time and money they don’t have (if they pull it off, they will be a big winner).
  • The bad bunch like Nvidia’s systems but not the pricing, so they are trying to introduce home-cooked chips.
  • The current structure of the AI supply chain will make it very difficult for the bad bunch to scale their chip volumes to a meaningful level.
  • The CoWoS capacity is Nvidia’s Joker – 3 years of capacity ensured, and they can outbid anybody else for additional capacity.

Disclaimer: I am a consultant working with business data on the Semiconductor Supply Chain. I own shares in every company mentioned and have had them for many years. I don’t day trade and don’t make stock recommendations. However, Investment banks are among my clients, using my data and analysis as the basis for trades and recommendations.

Also Read:

Ncredible Nvidia

Tools for Chips and Dips an Overview of the Semiconductor Tools Market

Oops, we did it again! Memory Companies Investment Strategy

Nvidia Sells while Intel Tells


Arm Client 2024 Growing into AI Phones and AI PCs

by Bernard Murphy on 06-05-2024 at 6:00 am


I wrote last year about the challenge Arm Client/Mobile faces in growing in a saturated phone market and how they put a big focus on mobile gaming to stimulate growth. The gaming direction continues but this year they have added (of course) an AI focus, not just to mobile but also to other clients, notably PCs. It would be easy to be cynical about this direction but there are now indications (see below) that AI in client applications is moving beyond promises and is translating into real products. While there is undeniable debate around risks of AI in personal devices, I believe real value with safety will ultimately win out over both hype and apocalyptic claims. Given existing momentum, product builders must be in the game to have a chance of reaping those benefits.

What’s new in Arm Client 2024

Think AI at the edge requires a dedicated AI accelerator? Think again; according to Chris Bergey (SVP and GM for the Arm Client line of business), 70% of Android 3rd party ML workloads run on Arm CPUs with no plan to move elsewhere. In support of this preference Arm continues to advance CPU and GPU platforms, this year introducing CSS – compute subsystem wrapping CPU and GPU cores, optimized and hardened now down to 3nm processes.

At the core IP level, Cortex-X925 delivers a 36% performance uplift for single-threaded processes, 41% uplift in AI (time to first token for tiny-Llama), while the Immortalis-G925 GPU offers 37% better performance over a range of graphics tasks and 34% improvement in inference performance over a wide set of AI and ML networks. Raytracing, first introduced in 2022, now delivers 52% improved performance on complex objects. For power saving architectures, Cortex-A725 provides 35% improved power efficiency over Cortex-A720 and the “LITTLE” CPU, Cortex-A520 has been further optimized for 15% improved power efficiency. Meanwhile Arm is showing 30% improvement in GPU power for games like Fortnite.

The Arm Client LOB has also introduced new software libraries to squeeze further application performance from these CSS-based designs. The first such libraries are Kleidi CV for computer vision and Kleidi AI for AI applications, exploiting Arm’s SVE2 and SME2 extensions. Based on a little digging around, Kleidi CV offers support for saturation arithmetic, color conversions, matrix transforms, image filters, and resize with interpolation. Details are sparser on Kleidi AI, but what I can find suggests support for what they call micro-kernels, which allow, say, an optimized matrix multiplication to be split into different threads across an output tensor. I think the key takeaway here is that for CSS implementations, just as the hardware can be maximally optimized, low-level software functions (say, for ONNX) can also be maximally optimized, which is what the Kleidi libraries aim to offer, especially in signal processing and AI operations.
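The work-splitting idea behind micro-kernels can be sketched in plain Python: give each thread its own band of the output tensor. This is only the concept; Kleidi’s actual micro-kernels run optimized SVE2/SME2 SIMD code, nothing like the toy below:

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_threaded(a, b, n_threads=4):
    """Toy micro-kernel-style matmul: the output matrix is split into
    row bands, and each band is computed by its own thread."""
    m, k, n = len(a), len(b), len(b[0])
    out = [[0] * n for _ in range(m)]

    def kernel(lo, hi):
        # One "micro-kernel" invocation owns rows lo..hi of the output.
        for i in range(lo, hi):
            for j in range(n):
                out[i][j] = sum(a[i][p] * b[p][j] for p in range(k))

    band = max(1, (m + n_threads - 1) // n_threads)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(kernel, s, min(s + band, m))
                   for s in range(0, m, band)]
        for f in futures:
            f.result()  # propagate any worker exception
    return out
```

Because each thread writes a disjoint slice of the output, no locking is needed, which is what makes this partitioning attractive for real kernels.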

Enabling AI phones and PCs

Good story but where’s the market demand? Before I get to AI, Arm has been co-optimizing for Android with Google for improved performance in Chrome, also rippling through to handset OEM browsers, YouTube performance and lower power. Apparently, this collaborative effort is paying off as reflected in a trend back to OEMs building on the Google distribution rather than their own Android variants. (On an unrelated note, did you know that YouTube now ranks as the most popular streaming service on TVs? Food for thought in growth potential for Google and Arm.) Another example of Arm widening the moat, here again through CSS and collaborative development around a market they already dominate.

A downside for AI and CV is increasing complexity in corresponding pipelines and stacks. Proprietary ISPs and AI accelerators are appealing for added performance of course, but if a standard platform can be tuned to offer enough performance at low enough power, I can equally see a cost and time to market case for sticking to hardware platform evolution rather than revolution.

For example, the new Samsung Galaxy AI provides real-time multi language translation among other AI-based features, building on top of Google Gemini. Other phone OEMs like Oppo, Vivo and Xiaomi are introducing their own AI assistants and LLMs in search of differentiation. All sitting on Arm processing.

On the AI-enabled PC front, I’ll first refer you back to my write up on the Cristiano Amon (CEO of Qualcomm) chat at Cadence Live 2024. There he made a big deal about the convergence between phone and PCs and particularly the opportunity to reinvent the PC and the PC market through the Qualcomm Snapdragon X-Elite processor. This processor has already been barnstorming the automotive industry; if you buy a new car, chances are high that the chip behind your infotainment system is Snapdragon X-Elite. Now it’s also taking off in laptops. You can buy such a laptop from Lenovo, Samsung, Dell, HP, Microsoft and ASUS and perhaps others. Given Microsoft support for Arm-based platforms I’m sure other semiconductor systems players are looking hard at this opportunity.

Speaking of Microsoft, Satya Nadella was recently interviewed by the Wall Street Journal and was very excited. He sees AI CoPilot+ PCs besting Macs which is quite a statement given that laptops in general have had little new to offer for quite a long time. He says the new Surface platforms are 58% faster than the M3 MacBook Air and have 20% better battery life. Together with lots of opportunities to AI-enable all sorts of apps locally on the PC. (To be clear, he is talking about at least some of the AI happening on the PC, not in the cloud.) Satya name-dropped Arm multiple times during this interview, so yes, if this AI PC transition is real and market-changing, Arm Client products will also benefit from that transition.

Exciting ideas and trends. Much still to prove of course, not least around safety/privacy implications. As an inveterate optimist I believe issues will shakeout over time and useful/beneficial innovations will survive. You can read the Arm release HERE.


Is it time for PCB auto-routing yet?

by Daniel Payne on 06-04-2024 at 10:00 am


PCB designers have been using manual routing for decades, so when is it time to consider interactive routing technologies to become more productive? Manually routing traces to connect components takes time from a skilled team member and involves human judgement that can introduce errors. When a design change comes in, iterating the PCB layout manually can be slow, leading to project delays. Growing complexity in PCB designs, caused by higher component density and boards using multiple layers, really challenges manual routing approaches. Finding PCB designers with the expertise to complete manual routing efficiently can be another burden.

PCB automation tools can quickly connect components, which saves designers time and effort. A designer can then spend their time on other challenges, like signal integrity validation, lowering interference, and meeting design constraints. An automated tool will produce consistent results, lowering errors. With automation, trace widths are more uniform and clearances are maintained, reducing the need for revisions. Even when a design change arrives, the automated routing approach tackles the task quickly. The learning curve for automation tools is brief, so payback happens quickly. The improvement from automation depends on design complexity, routing automation quality, designer skills, and project requirements.

The industry isn’t at the level of full automation for PCB routing, so don’t expect a push-button flow for all designs. A PCB design may have unique and complex design constraints and requirements that would make an autorouter ineffective. Component placement, signal integrity (SI), thermal management, and EMI/EMC compliance, can all require manual routing. Analog components and high-speed signals may demand manual routing to achieve the precision required. Some autorouters introduce errors, causing manual rework and unforeseen delays. Learning a new autorouter can require time to become proficient, especially if the tool has too many arcane settings. Achieving a specific PCB layout style may not be possible using an automated tool. Some combination of manual and automated routing will typically yield the best results.

PCB designers are still required to handle the design concept and specification, select the proper components, perform signal integrity analysis and high-speed optimization, route the most critical signals, manage thermal goals, and comply with EMI/EMC requirements. Humans are also best suited to make complex trade-offs between cost, performance, size, and manufacturing. Yes, automation tools will speed up parts of a PCB project, while manual methods will remain for the more abstract tasks.

PADS Professional

Siemens has created a sketch path feature in the PADS Professional tool that enables a PCB designer to achieve high design quality and high completion rates in less time compared to manual routing. Users can route individual traces or hundreds of single-ended nets and differential pairs. Plus, this technology will automatically improve pin escapes during routing while avoiding the use of vias.

Sketch routing allows a PCB designer to automatically fan out, untangle, and route specific nets with hand-route quality, as shown below.

(Figures: the sketched path and the resulting autoroute)

With sketch routing a designer quickly draws where routing should go by selecting an unrouted net and “dragging” the cursor on the rough path they’d like. Then the PADS Professional sketch router will automatically complete the routing one net at a time, all with little or no cleanup required. Even using components like BGAs, the sketch routing completes without using extra vias, achieving typical completion rates above 90 percent. The sketch routing in PADS Professional uses dynamic push-and-shove, along with real trace routing.

Summary

The old maxim that time is money still rings true for PCB projects today. Manual routing has been done for many years, yet it can take too much time and likely requires an experienced user. The sketch router feature in PADS Professional is a capable method to automate many routing tasks that used to be done manually, so your project can complete in less time, with fewer experts and higher quality.

Read the 8-page eBook online from Siemens.



Silicon Creations is Enabling the Chiplet Revolution

by Mike Gianfagna on 06-04-2024 at 6:00 am


The multi-die, chiplet-based revolution is upon us. The ecosystem will need to develop various standards and enabling IP to make the “mix and match” concept a reality. UCIe, or Universal Chiplet Interconnect Express, is an open, multi-protocol, on-package die-to-die interconnect and protocol standard that promises to pave the way to a multi-vendor chiplet market. But delivering an implementation that balances all the requirements for power, performance, and form factor can be quite challenging. At the recent IP-SoC Silicon Valley event, Silicon Creations presented a comprehensive strategy to overcome these challenges. Read on to see how Silicon Creations is enabling the chiplet revolution.

About Silicon Creations

Silicon Creations is a self-funded, leading silicon IP provider with development in the US and Poland, and a sales presence worldwide. The company provides world-class IP for precision and general-purpose timing (PLLs), oscillators, low-power, high-performance multi-protocol and targeted SerDes, and high-speed differential I/Os. Applications include smart phones, wearables, consumer devices, processors, network devices, automotive, IoT, and medical devices.

The majority of the world’s top 50 IC companies work with Silicon Creations. 1,000+ chips contain the company’s IP using over 700 unique IP products. Silicon Creations touches over 150 production tape-outs each year with over 400 customers, with 3nm designs in mass production. You can learn more about Silicon Creations at SemiWiki here.

About the Die-to-Die Interface Challenges

Blake Gray

Blake Gray developed a comprehensive presentation for IP-SoC Silicon Valley. He is the Director of Hardware Engineering at Silicon Creations. He’s been with the company for over 12 years. Unfortunately, he fell ill before the event and Jeff Galloway, Principal and Co-Founder at Silicon Creations stepped in to present for Blake. Let’s take a look at the excellent material Blake developed. It begins with a discussion of the design and performance challenges of transmit (TX) clock design.

Clock performance is critical; the clock is distributed to all TX subcomponents. Furthermore, UCIe employs a two-phase feed-forward clocking architecture, where the timing jitter that matters is between clock and data edges. Optimizing the TX clock design is a critical element of effective die-to-die communication. If the die-to-die interface becomes the performance bottleneck, the advantages of a chiplet design are potentially lost, so the stakes are high.

Next were the four competing requirements that must be balanced for a successful design: low power, small form factor, ultra-low jitter, and a wide tuning range. For this last point, a clocking solution with a wide tuning range is useful, as it can support all required data rates with no need to integrate data-rate-specific solutions per project. This makes the whole design effort easier and more reusable. The figure below illustrates these design challenges and some of the design solutions required.

TX Clock Design and Performance Challenges

The Silicon Creations Approach

The presentation then focused on some of the work going on at Silicon Creations to address these challenges. A dedicated die-to-die ring PLL was described that is currently in development on TSMC 7nm FF, but is easily portable to other process nodes. The PLL can be driven by any quality clock source, or even a resonator-based oscillator.

Applying this clocking solution to a standard-package, 32GT/s application with a 16-bit data width results in a maximum physical-layer power of 24mW x 16 = 384mW. More details on power consumption and other parameters are summarized in the diagram below.
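The arithmetic generalises easily. A small sketch (the function names are mine, not Silicon Creations’) reproduces the 384mW figure and derives the per-lane energy efficiency it implies, assuming the 24mW is per 32GT/s lane:

```python
def phy_power_mw(per_lane_mw, lanes):
    """Total physical-layer power: per-lane power times lane count."""
    return per_lane_mw * lanes

def energy_per_bit_pj(per_lane_mw, gbps):
    """Energy per bit in pJ: mW divided by Gb/s gives pJ/bit directly."""
    return per_lane_mw / gbps

total_mw = phy_power_mw(24, 16)        # the 384mW figure quoted above
pj_per_bit = energy_per_bit_pj(24, 32) # implied per-lane efficiency
```

The unit shortcut works because 1 mW / 1 Gb/s = 10⁻³ J/s ÷ 10⁹ b/s = 10⁻¹² J/b, i.e. 1 pJ/bit.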

Other die-to-die solutions also exist for TSMC 16/12/6/5/4/3/2nm. The presentation concluded by stating that the immense performance requirements of clocking solutions (ultra-low jitter, low power, wide tuning range, and small form factor) mandate careful design considerations and optimization tradeoffs. The Silicon Creations clocking/XO sustaining circuit IP portfolio is well-positioned to meet the demands of designs requiring optimal die-to-die communications.

To Learn More

You can see the full line of high-performance IP available from Silicon Creations here. If you would like to reach out to the company to learn more, you can do that here. And that’s how Silicon Creations is enabling the chiplet revolution.


Unlocking the Future: Join Us at RISC-V Con 2024 Panel Discussion!

by Daniel Nenni on 06-03-2024 at 10:00 am


Are you ready to dive into the heart of cutting-edge computing? RISC-V Con 2024 is just around the corner, and we’re thrilled to invite you to a riveting panel discussion that promises to reshape your understanding of advanced computing. On June 11th, from 4:00 to 5:00 PM, at the prestigious DoubleTree Hotel in San Jose, California, join us for an insightful exploration into “How RISC-V is Revolutionizing Advanced Computing from AI to Autos!”

Moderated by the eminent Mark Himelstein from Heavenstone, Inc., our panel brings together luminaries in the field, each offering unique perspectives and invaluable insights. Dr. Charlie Su of Andes Technology, Dr. Lars Bergstrom from Google, Barna Ibrahim of RISC-V Software Ecosystem (RISE), and Dr. Sandro Pinto of OSYX Technologies will grace the stage, guiding us through the intricate landscape of RISC-V’s transformative power.

At the heart of our discussion lie two pivotal questions that demand attention:

What are people doing today or in the next 6 – 12 months with RISC-V?

RISC-V isn’t just a theoretical concept; it’s a driving force behind real-world innovation. Our esteemed panelists will delve into the current landscape, shedding light on the groundbreaking projects and initiatives underway. From artificial intelligence to automotive technology, discover how RISC-V is catalyzing progress across diverse industries, paving the way for a future defined by unprecedented efficiency and performance.

What are the key elements needed in the ecosystem in the next 6 – 12 months for RISC-V to make more progress? (Expect some security and hypervisor discussion.)

Progress knows no bounds, but it requires a robust ecosystem to thrive. Join us as we explore the essential components that will propel RISC-V forward in the coming months. From enhancing security measures to advancing hypervisor technology, our panelists will dissect the critical elements necessary to nurture RISC-V’s evolution. Be prepared for a thought-provoking discussion that will shape the trajectory of RISC-V development and adoption worldwide.

This panel discussion isn’t just an opportunity to witness industry leaders in action—it’s a chance to be part of the conversation that’s shaping the future of computing. Whether you’re a seasoned professional, an aspiring innovator, or simply curious about the latest advancements in technology, this event promises to inspire and enlighten.

So mark your calendars and secure your spot at RISC-V Con 2024! Join us on June 11th at the DoubleTree Hotel in San Jose, California, from 4:00 to 5:00 PM, and embark on a journey into the forefront of advanced computing. Together, let’s unlock the limitless potential of RISC-V and forge a path towards a brighter tomorrow.

REGISTER HERE

Also Read:

Andes Technology: Pioneering the Future of RISC-V CPU IP

A Rare Offer from The SHD Group – A Complimentary Look at the RISC-V Market

LIVE WEBINAR: RISC-V Instruction Set Architecture: Enhancing Computing Power


The Case for U.S. CHIPS Act 2

The Case for U.S. CHIPS Act 2
by Admin on 06-03-2024 at 8:00 am

America CHIPs ACT

Photo by Brandon Mowinkel on Unsplash

Despite murky goals and moving targets, the recent CHIPS Act sets the stage for long-term government incentives.

Authored by Jo Levy and Kaden Chaung

On April 25, 2024, the U.S. Department of Commerce announced the fourth, and most likely final, grant under the current U.S. CHIPS Act for leading-edge semiconductor manufacturing. With a Preliminary Memorandum of Terms (PMT) valued at $6.14B, Micron Technology joined the ranks of Intel Corporation, TSMC, and Samsung, each of which is slated to receive between $6.4B and $8.5B in grants to build semiconductor manufacturing capacity in the United States. Together, these four allotments total $27.64B, just shy of the $28B that Secretary of Commerce Gina Raimondo announced would be allocated to leading-edge manufacturing under the U.S. CHIPS Act. The Secretary has stated ambitions to increase America’s global share of leading-edge logic manufacturing to 20% by the end of the decade, up from the nation’s current position of 0%. But will $27.64B in subsidies be enough to achieve this lofty goal?

Figure #1, Data taken from NIST and the White House

To track achievement toward the 20% goal, one needs both a numerator and a denominator. The denominator is global leading-edge logic manufacturing, while the numerator is limited to leading-edge logic manufacturing in the United States. Over the next decade, both figures will be subject to large-scale changes, making neither easy to predict. For nearly half a century, the pace of Moore’s Law has kept the term “leading-edge” in constant flux, making it difficult to determine which process technology will be considered leading-edge in five years’ time. Moreover, American chip manufacturing must keep pace with foreign development, as numerous countries are also racing to onshore leading-edge manufacturing. These two moving targets make it difficult to measure progress toward Secretary Raimondo’s stated goal and warrant a closer examination of the challenges ahead.
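The share metric can be made concrete with a toy calculation. The capacity figures below are hypothetical, chosen only to show how growth in the denominator can offset growth in the numerator:

```python
# Toy illustration of the 20% share metric. Capacity figures are
# hypothetical (thousands of wafers per month), not actual industry data.
def leading_edge_share(us_capacity: float, global_capacity: float) -> float:
    """US share of global leading-edge logic capacity, in percent."""
    return 100.0 * us_capacity / global_capacity

# Even if US capacity grows from 0 to 120k wafers/month, a global base
# that grows to 800k leaves the US share below the 20% target.
print(leading_edge_share(120, 800))  # 15.0
```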

Challenge #1: Defining Leading Edge and Success Metrics

The dynamic nature of Moore’s Law, which predicts the number of transistors on a chip will roughly double every two years (while holding costs constant), leads to a steady stream of innovation and rapid development of new process technologies. Consider TSMC’s progression over the past decade. In 2014, it was the first company to produce 20 nm technology in high-volume production. Now, in 2024, the company is mass producing logic technology at the 3 nm scale. Intel, by comparison, is currently mass producing Intel 4. (In 2021, Intel renamed its 7 nm process node to Intel 4.)
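The doubling cadence quoted above implies exponential growth in transistor counts; a quick sketch (the helper function is mine, not from the article):

```python
# Moore's Law as stated above: transistor counts roughly double every
# two years, i.e. a growth factor of 2^(years / 2).
def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2.0 ** (years / doubling_period)

print(moore_factor(10))  # 32.0 -- roughly 32x more transistors per decade
```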

Today, the definition of advanced technologies and leading edge remains murky at best. As recently as 2023, TSMC’s Annual Report identified anything smaller than 16 nm as leading edge. A recent TrendForce report used 16 nm as the dividing line between “advanced” and mature nodes. TrendForce predicts that U.S. manufacturing of advanced nodes will grow from 12.2% to 17% between 2023 and 2027, while Secretary Raimondo declared “leading-edge” manufacturing will grow from 0% to 20% by 2030. This lack of clarity dates back to the 2020 Semiconductor Industry Association (“SIA”) study that served as the impetus for the U.S. CHIPS Act. The 2020 SIA report concluded that U.S. chip production dropped from 37% to 10% between 1999 and 2019, based upon total U.S. chip manufacturing data. To stoke U.S. fears of lost manufacturing leadership, it pointed to the fast-paced growth of new chip factories in China, though none of these would be considered leading-edge under any definition.

A new 2024 report by the SIA and the Boston Consulting Group on semiconductor supply chain resilience further muddies the waters by shifting the metrics surrounding the advanced-technologies discourse. It defines semiconductors at the 10 nm scope as “advanced logic” and forecasts that the United States’ position in advanced logic will increase from 0% in 2022 to 28% by 2032. It also predicts that the definition of “leading-edge” will encompass technologies newer than 3 nm by 2030, but fails to provide any projection of the United States’ position within the sub-3 nm space. This raises the question: should Raimondo’s ambition to reach 20% in leading edge be evaluated under the scope of what is now considered “advanced logic”, or should the standard be held to a more rigorous definition of “leading edge”? As seen in the report, U.S. manufacturing will reach the 20% goal by a comfortable margin if “advanced logic” is used as the basis for evaluation. Yet the 20% goal may be harder to achieve under the stricter notion of “leading edge”.

Figure #2, Data taken from NIST

The most current Notice of Funding Opportunity under the U.S. CHIPS Act defines leading-edge logic as 5 nm and below. Many of the CHIPS Act logic incentives are for projects at the 4 nm and 3 nm level, which presumably meet today’s definition of leading-edge. Intel has announced plans to build one 20A and one 18A factory in Arizona, two leading-edge factories in Ohio, bring the latest High-NA EUV lithography to its Oregon research and development factory, and expand its advanced packaging in New Mexico. TSMC announced it will use its incentives for 4 nm FinFET, 3 nm, and 2 nm but has not identified how much capacity will be allocated to each. Similarly, Samsung revealed plans to build 4 nm and 2 nm capacity, as well as advanced packaging, with its CHIPS Act funding. Like TSMC, Samsung has not shared the volume production it expects at each node. However, by the time TSMC’s and Samsung’s 4 nm projects are complete, they are unlikely to be considered leading-edge by industry standards. TSMC is already producing 3 nm chips in volume and is expected to reach high-volume manufacturing of 2 nm technologies in the next year. Both Intel and TSMC are predicting high-volume manufacturing of sub-2 nm by the end of the decade. In addition, the Taiwanese government has ambitions to break through to 1 nm by the end of the decade, which may lead to an even narrower criterion for “leading-edge.”

In this way, the United States’ success will be contingent on the pace of developments within the industry. So far, the CHIPS Act allocations for leading-edge manufacturing are slated to contribute to two fabrication facilities producing at 4 nm and six at 2 nm or lower. If “leading-edge” narrows to 3 nm technologies by 2030 as predicted, roughly a fourth of the fabrication facilities built for leading-edge manufacturing would not contribute to the United States’ overall leading-edge capacity.
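The fraction follows directly from the fab counts (the 2-and-6 split is taken from the paragraph above; the arithmetic sketch is mine):

```python
# Of the eight leading-edge fabs funded so far, two target 4 nm.
# If "leading-edge" narrows to 3 nm and below by 2030, those two
# would no longer count toward the US leading-edge share.
fabs_4nm = 2
fabs_2nm_or_below = 6
excluded_fraction = fabs_4nm / (fabs_4nm + fabs_2nm_or_below)
print(excluded_fraction)  # 0.25 -- roughly a fourth
```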

If the notion of “leading-edge” shrinks further, even fewer fabrication facilities will count toward the United States’ leading-edge capacity. For instance, if the Taiwanese government succeeds with its 1 nm breakthrough, it would cast further doubt on the validity of even a 2 nm definition of “leading-edge”. Under such circumstances, the Taiwanese government would not only keep the target moving but shift the goalposts for the rest of the industry in the process, greatly complicating American efforts to reach the 20% mark. It is thus essential for American leadership to track foreign developments in the manufacturing space while developing its own.

Challenge # 2: Growth in the United States Must Outpace Foreign Development

Any measure of the success of the CHIPS Act must consider not only the output of leading edge fabrication facilities built in the United States, but also the growth of new fabs outside the United States. Specifically, to boost its global share of leading edge capacity by 20%, the U.S. must not only match the pace of its foreign competition, it must outpace it.

This means the U.S. must contend with Asia, where government subsidies and accommodating regulatory environments have boosted fabrication innovation for decades. Though Asian manufacturing companies will contribute to the increase of American chipmaking capabilities, it appears most chipmaking capacity will remain in Asia through at least 2030. For instance, while TSMC’s first two fabrication facilities in Arizona can produce a combined output of 50,000 wafers a month, TSMC currently operates four fabrication facilities in Taiwan that can each produce up to 100,000 wafers a month. Moreover, Taiwanese companies have announced plans to set up seven additional fabrication facilities on the island, two of which are TSMC 2 nm facilities. In South Korea, the president has unveiled plans to build 16 new fabrication facilities through 2047 with a total investment of $471 billion, establishing a fabrication mega-cluster in the process. The mega-cluster will include contributions by Samsung, suggesting expansion of Korea’s leading-edge capacity. Even Japan, which has not been home to leading-edge logic fabrication in recent years, has taken steps to establish its capabilities. The Japanese government is currently working with the startup Rapidus to initiate production of 2 nm chips, with 2 nm and 1 nm fabrication facilities planned. While the U.S. has taken a decisive step to initiate chipmaking, governments in Asia are also organizing efforts to establish or maintain their lead.
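The capacity gap in that comparison is stark; restating the wafer numbers quoted above (my arithmetic, using only the figures in the paragraph):

```python
# TSMC's two Arizona fabs: ~50,000 wafers/month combined.
# TSMC's four largest Taiwan fabs: up to 100,000 wafers/month each.
arizona = 50_000
taiwan = 4 * 100_000
taiwan_share = taiwan / (taiwan + arizona)
print(round(taiwan_share, 2))  # 0.89 -- nearly nine-tenths stays in Taiwan
```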

Asia is not the only region growing its capacity for leading-edge chip manufacturing. The growth of semiconductor manufacturing within the E.U. may further complicate American efforts to increase its leading-edge share to 20%. The European Union recently approved the E.U. Chips Act, a $47B package that aims to bring the E.U.’s global semiconductor share to 20% by 2030. Already, both Intel and TSMC have committed to expanding semiconductor manufacturing in Europe. In Magdeburg, Germany, Intel seeks to build a fabrication facility that uses post-18A process technologies, producing semiconductors on the order of 1.5 nm. TSMC, on the other hand, plans to build a fabrication facility in Dresden producing 12/16 nm technologies. Though the Dresden facility may not be considered leading-edge, TSMC’s involvement could lead to more leading-edge investments within European borders in the near future.

In addition to monetary funding under the CHIPS Act, the U.S. also faces non-monetary obstacles that may hamper its success. TSMC’s construction difficulties in Arizona have been well-documented and contrasted with the company’s brief and successful construction process in Kumamoto, Japan. Like TSMC, Intel’s U.S. construction in Ohio has also faced setbacks and delays. According to the Center for Security and Emerging Technology, many countries in Asia provide infrastructure support, easing regulations in order to accelerate the logistical and utilities-based processes. For instance, during Micron’s expansion within Taiwanese borders, the Taiwanese investment authority assisted the company with land acquisition and lessened the administrative burden the company had to undergo for its construction. The longer periods required to obtain regulatory approvals and complete construction in the U.S. provide other nations with significant lead time to outpace U.S. growth.

Furthermore, the monetary benefits of CHIPS Act awards will take time to materialize. Despite headlines claiming CHIPS Act grants have been awarded, no actual awards have been issued. Instead, Intel, TSMC, Samsung, and Micron have received Preliminary Memoranda of Terms, which are not binding obligations; they are the beginning of a lengthy due diligence process. Each recipient must negotiate a long-form term sheet and, depending upon the amount of funding per project, may need to obtain congressional approval. As part of due diligence, funding recipients may also be required to complete environmental assessments and obtain government permits. Government permits for semiconductor factories can take 12 to 18 months to obtain. Environmental assessments can take longer: for example, the average completion and review period for an environmental impact statement under the National Environmental Policy Act is 4.5 years. Despite the recent announcements of preliminary terms, the path to actual term sheets and funding will take time to complete.

Even if the due diligence and term sheets are expeditiously completed, the recipients still face years of construction. The Department of Commerce estimates a leading-edge fab takes 3 to 5 years to construct after the approval and design phase is complete. Moreover, two of the four chip manufacturers have already announced delays in construction projects covered by CHIPS Act incentives. Accounting for 2 to 3 years to obtain permits and complete due diligence, 3 to 5 years for new construction, and an additional year of delay, it may be 6 to 9 years before any new fabs begin production. To achieve the CHIPS Act goal of 20% by 2030, the United States must do more than provide funding; it must ensure the due diligence and permitting processes are streamlined to remain competitive with Europe and Asia.
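The 6-to-9-year estimate is simply the sum of the stage ranges; spelled out (a sketch of the arithmetic in the paragraph above):

```python
# Timeline stages as (low, high) year ranges, per the estimates above.
stages = {
    "permits and due diligence": (2, 3),
    "construction": (3, 5),
    "announced delays": (1, 1),
}
low = sum(lo for lo, _ in stages.values())
high = sum(hi for _, hi in stages.values())
print(f"{low}-{high} years before new fabs begin production")  # 6-9 years
```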

The Future of Leading-Edge in the United States

Between the constant changes in the meaning of “leading-edge” under Moore’s Law and the growing presence of foreign competition within the semiconductor industry, the recent grant announcements of nearly $28B for leading-edge manufacturing are only the start of the journey. The real test for the U.S. CHIPS Act will occur over the next few years, when the CHIPS Office must do more than monitor semiconductor progress within the U.S. It must also facilitate timely completion of the CHIPS Act projects and measure their competitiveness as compared to overseas expansions. The Department of Commerce must continually evaluate whether its goals still align with developments in the global semiconductor industry.

As such, whether the United States proves successful largely depends on why achieving the 20% target matters. Is the goal to establish a steady supply of advanced logic manufacturing to protect against foreign supply-side shocks, or is it to take and maintain technological leadership against the advancements of East Asia? In the former case, adhering to the notion of “advanced logic” will suffice; the achievement would be smaller than what was initially promised under “leading-edge”, but it remains a measured and sensible goal for the U.S. In the latter case, achieving the 20% benchmark would place the United States in a much stronger position within the global supply chain. Doing so, however, will undoubtedly require much greater funding for leading-edge than the $28B allocated so far.

National governments are increasingly investing to establish stronger productive capacity for semiconductors, and many will continue to do so in the coming decades. If the United States aims to keep pace with the rest of the industry, it must maintain a steady stream of support for leading-edge technologies. It will be an expensive initiative, but leading figures such as Secretary Raimondo are already suggesting a second CHIPS Act to expand upon the initial effort. In the global race, another subsidy package would provide the nation with a much-needed push toward the 20% finish line. Hence, despite all the murkiness surrounding the United States’ fate within the semiconductor industry, one fact remains certain: the completion of the CHIPS Act should be seen not as the conclusion, but as the prologue to America’s chipmaking future.

Also Read:

CHIPS Act and U.S. Fabs

Micron Mandarin Memory Machinations- CHIPS Act semiconductor equipment hypocrisy

The CHIPS and Science Act, Cybersecurity, and Semiconductor Manufacturing

Why China hates CHIPS


Follow the Leader – Synopsys Provides Broad Support for Processor Ecosystems

Follow the Leader – Synopsys Provides Broad Support for Processor Ecosystems
by Mike Gianfagna on 06-03-2024 at 6:00 am

Follow the Leader – Synopsys Provides Broad Support for Processor Ecosystems

Synopsys has expanded its ARC processor portfolio to include a family of RISC-V processors. This was originally reported on SemiWiki last October. There is also a recent in-depth article on the make-up of the ARC-V family on SemiWiki here. This is important and impactful news; I encourage you to read these articles if you haven’t done so already. What I want to cover in this post is a broader perspective on what Synopsys is doing to provide holistic support for the entire processor ecosystem. I’ve always felt that the market leader should be expanding the market, creating new opportunities for not just itself, but for its current and future customers as well as the entire ecosystem at large. This is such a story. Follow the leader to see how Synopsys provides broad support for processor ecosystems.

The Organizational View

Kiran Vittal

The org chart for a company can tell a lot about strategy, and in the case of support for the processor ecosystem at Synopsys there is information to be gleaned here. About two years ago, Kiran Vittal was named Executive Director, Ecosystem Partner Alliances Marketing at Synopsys. In this role, Kiran has the charter to work with IP partners, foundry partners, EDA partners, and the rest of Synopsys to optimize the EDA tool flows and associated IP for the markets served. Note there is no specific charter regarding Arm, ARC, or RISC-V; Kiran has it all. This organizational setup is an important ingredient in facilitating holistic support for an entire ecosystem. Just the way a market leader should.

I’ve known Kiran for quite a while. We worked together at Atrenta (the SpyGlass company) before it was acquired by Synopsys. Kiran has exactly the right personality and technical depth (my opinion) to do this rather challenging job. Recently, I was able to speak with Kiran to get a long-overdue update. Here are some of the things I learned.

What Holistic Support Looks Like

Kiran began by describing the broad support Synopsys offers for implementation and verification in the growing RISC-V market. The strategy covers a lot of ground across architectural exploration, IP support, software development, DevOps, HW/SW verification, design services, and a broad suite of simulation and verification methodologies. The figure below summarizes all this.

Leading Solutions for RISC V Implementation & Verification

As I mentioned earlier, Kiran’s perspective is NOT limited to the RISC-V market. Arm has also been a strong partner for many years. There is plenty going on across the spectrum of processor options. The graphic at the top of this post is a high-level summary of this strategy. Some details are useful:

For Arm

  • Deep collaboration on advanced nodes down to 2nm
  • Fusion QuickStart Kits & verification collateral available for latest cores
  • Opportunity to tune the cores to get the best out-of-the-box PPA

For Synopsys ARC/ARC-V

  • Enablement of ARC/ARC-V cores with Synopsys digital design and verification families
  • Fusion QuickStart Kits for high-performance ARC/ARC-V cores
  • Design services for customer enablement

For RISC-V Cores

  • Partnering with key customers, RISC-V core providers, foundries and universities
  • Building customized flows for implementation and verification
  • Fusion QuickStart Kits (QIKs) with SiFive® available now

Kiran explained that Synopsys has a growing list of ARC-V customers, but there are also a lot of customers who have chosen to source processor IP from other vendors. Once the customer chooses the processor IP, Synopsys can provide a rich set of EDA tools, flows and IP to support that choice. It is true that ARC has been part of Synopsys for quite a while. That means all ARC products enjoy a tight integration and validation with Synopsys tools and IP.

While this does provide a competitive advantage, Synopsys still maintains strong relationships across the processor ecosystem to ensure a rich experience regardless of processor choice. As we had this discussion, I kept thinking about my view of the way a market leader behaves.

There is a lot more to the ARC-V story. I’ll be providing links to learn more in a moment. But first, I want to share a really interesting case study regarding a RISC-V design.

AI Impact on Processor Design

AI is finding its way into every part of our lives. If you design advanced processors, this is true as well. Here are some compelling details for an AI-driven optimization of a RISC-V high performance CPU core.

The design is a RISC-V based “Big Core” targeted for data center applications. A single core measures 426 µm × 255 µm. The target process technology is 5 nm. The baseline (starting point) for this exercise was 1.75 GHz at 29.8 mW of power. This represents the out-of-the-box results from a RISC-V reference flow.

The desired target for this design was 1.95 GHz at 30 mW of power. The customer estimated hitting this target would take two expert engineers about one month. Applying the Synopsys DSO.ai AI-driven RISC-V reference flow, 1.95 GHz at 27.9 mW was achieved in two days and 90 software runs, with no human effort. The expected area target was also met.
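Restated as relative improvements (the numbers come from the case study above; the helper function is my own):

```python
# Percentage change between the baseline and DSO.ai-optimized results.
def pct_change(before: float, after: float) -> float:
    return 100.0 * (after - before) / before

freq_gain = pct_change(1.75, 1.95)    # baseline 1.75 GHz -> 1.95 GHz
power_gain = pct_change(29.8, 27.9)   # baseline 29.8 mW -> 27.9 mW
print(round(freq_gain, 1), round(power_gain, 1))  # 11.4 -6.4
```

So the AI flow not only hit the frequency target but also beat the power target by a few percent.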

This is what the future of processor design looks like.

To Learn More

If you want to learn more about the Synopsys ARC-V processor IP family, you can find it here.  If you want to step back and look at the overall processor solutions available from Synopsys, look here or you can learn about the Synopsys overall strategy to support RISC-V here. And if you want to learn more about the revolutionary DSO.ai capability from Synopsys, check out this link.  And that’s how you can follow the leader to see how Synopsys provides broad support for processor ecosystems.


Podcast EP226: Lumotive’s Disruptive Optical Beamforming Technology With Dr. Sam Heidari

Podcast EP226: Lumotive’s Disruptive Optical Beamforming Technology With Dr. Sam Heidari
by Daniel Nenni on 05-31-2024 at 10:00 am

Dan is joined by Dr. Sam Heidari. Sam brings three decades of extensive management experience in the semiconductor sector. He currently holds the position of CEO at Lumotive, serves as Chairman at Arctic Semiconductor, and advises multiple technology startups. Previously, he served as CEO and Chairman at Quantenna Communications, CEO at Doradus Technologies, and CTO at Ikanos.

Dan and Sam begin by discussing the various applications for light in semiconductor systems. AI computing, holographic displays, high-performance compute and 3D sensing are all touched on.

Sam then describes the revolutionary new technology developed by Lumotive. Light Control Metasurface (LCM™) technology dynamically shapes and steers light on the surface of a chip, without any moving parts. This new technology has many applications. Sam focuses on its ability to implement 3D-sensing lidar systems with low cost and high precision for automotive applications.

The deployment of the technology in automotive platforms as well as edge computing are discussed with an assessment of the broader impact the technology will have going forward.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Nikos Zervas of CAST

CEO Interview: Dr. Nikos Zervas of CAST
by Daniel Nenni on 05-31-2024 at 6:00 am

Nikos Zervas CAST 2022

Dr. Nikos Zervas joined CAST in 2010, serving as VP of Marketing and Engineering and as COO before becoming CEO. Prior to CAST, Nikos co-founded Alma Technologies and served as its Chairman and CEO for nine years. Under his leadership, Alma Technologies bootstrapped its way to becoming a reputable IP provider, developing cutting-edge image and video compression cores and solutions. His research has been published in over 40 technical journal and conference papers, and he has been honored by the IEEE Computer Society with a Low-Power Design Contest award. Nikos served on the board of the Hellenic Semiconductor Industry Association from 2009 until 2013, where he was responsible for strategic planning.

Tell us about your company?
CAST provides IP cores that save time and money for electronic system developers across a wide range of industry segments and application areas.
Our current product line includes processors, compression, interfaces, peripherals, and security, and features industry-leading cores for automotive, industrial, defense, data centers, and other application areas. We focus on providing a better IP experience with every core, meaning high-quality products that are easy to use, backed by market-leading technical support, and available with simple, cost-effective licensing, including royalty-free terms.

CAST was established in 1993 and helped to pioneer and shape today’s booming IP industry. The company is private and has always been internally held. This strategic avoidance of external investors or debt of any kind has given us the freedom to grow organically and to focus on solving real customer problems rather than being driven primarily by the financial expectations of venture capitalists or the demands of shareholders.

CAST is a global company, with headquarters in New Jersey; staff in San Diego, Austin, and other US locations; offices in the Czech Republic, Italy, Greece, and Poland; and partners throughout Asia. We’ve sold thousands of products to hundreds of customers, with a repeat sales rate of over 50%. These customers range from established tier-1 firms to emerging start-ups, and together they are shipping millions of product units using IP from CAST.

We employ a novel and highly successful approach to developing IP products: our own engineering team develops many CAST IP cores, and we also work with tightly coupled partners who have outstanding technical experience in particular areas. We enforce the same stringent quality standards on every core that CAST offers and we provide front-line support for them all, so customers can get the most advanced IP technology available but work with a provably reliable and trustworthy provider. And for our development partners, it gives their engineering talent access to otherwise unapproachable markets and the advantages of what we believe is the industry‘s most experienced IP sales and marketing team.

What problems are you solving?
We strive to deliver the true promise of reusable IP cores across our whole product line: helping design groups complete SoCs in the shorter time frames demanded by market conditions, even when those designers lack detailed personal knowledge of every piece of technology they must use to develop competitive and successful products. For example, a developer can reliably build a system around a functional-safety RISC-V processor core from CAST without actually knowing how to design a processor or achieve functional safety status.

CAST enables SoC developers to focus on where they really excel. We provide them with effective, reliable IP cores that they can use to build their systems, we bundle these products with class-leading support should the developer need it during integration, and we offer all that with fair terms that work well for both parties. Our customers have peace of mind knowing that every IP core they get from CAST is going to do the job it’s supposed to do in an efficient way, without causing any trouble. Customers can thus bring products to market in 18 months or so rather than the decade it might take without IP cores.

What application areas are your strongest?
We’re proud to offer a product line that’s both broad and deep, including several stand-out IP cores for different application areas.

The Automotive market is racing along and we offer cores for interfaces—with CAN and TSN leadership—plus processors and compression, most enabling functional safety with ASIL-D readiness.

Aerospace and Defense contractors especially value our efficient networking stacks for communication and our compression cores. Both the Mars Curiosity rover and the Hubble Space Telescope employ image compression IP licensed from CAST, and there are many more like those we cannot talk about.

Industrial Automation developers use our 32- and 8-bit microcontrollers (the 8051 still lives!), networking stacks, and interface cores. TSN is hot in this market, and interest in single-pair Ethernet is growing rapidly.

Data Centers take advantage of our hardware compression engines to optimize storage and bandwidth, as well as our encryption and TCP/UDP/IP accelerators to keep up with today’s staggering networking speeds without throttling their CPUs.

Wearables and Internet of Things systems take advantage of our ultra-low power embedded processors, peripheral-subsystems, compression, and interfaces.

CAST customers typically demand confidentiality to preserve their competitive advantage, so we can’t really talk about who has fielded which products. But it’s true that IP from CAST is probably closer to every reader of this interview than they might realize:

  • Your smartphone and smartwatch are most likely using some CAST IP,
  • Your cloud data is physically stored in data centers that probably use our compression and networking stacks,
  • The sensors, ECUs, braking, and other systems in your car are most likely communicating over IP from CAST,
  • The security cameras in your local airport (or even in your house) could be using compression from CAST, and
  • That Mars or space wallpaper photo on your computer has probably been compressed with IP CAST provided.

What keeps your customers up at night?
CAST customers have no shortage of stress factors, but with respect to IP their greatest fear is experiencing immense delays or even entire re-spins because of bad IP. We address this natural fear in several ways.

First, we enforce experience-based standards for reusability and quality on every core, from its high-level architecture down to its coding, documentation, and packaging.

Second, we don’t cut corners, especially as far as verification is concerned. We never treat a core’s first customers as beta testers, and we have learned that getting a new core to market quicker is not worth customers getting their product to market later.

And third, we invest in a support infrastructure ensuring that the questions or issues customers inevitably may have are resolved with speed and satisfaction.

The latter, technical support responsiveness, can be a real frustration point for many IP core users, who sometimes wait days or weeks for a response to their urgent queries. Instead, at CAST we have a 24/7 support mentality and coverage network and have the actual IP developers available to assist. Our average first response time is currently under 24 hours, and our resolution time is less than three days. This is how we demonstrate respect to our customers.

What does the competitive landscape look like and how do you differentiate?
While the dozens (hundreds?) of “IP core companies” that once existed have been reduced through failure or acquisition, those that have thrived, like CAST, have either focused on technical niches where they shine or learned to compete on price, perceived product quality, comprehensive services, or some combination.

At CAST, we don’t tend to worry about particular competitors but instead focus on providing A Better IP Experience for IP customers. This means reducing risk by providing proven, high-quality IP products backed by fast, effective technical support and available with fair, flexible, and typically royalty-free licensing.

While we have a strong—and growing—engineering group within CAST, our model of close, long-term partnerships with select domain experts enables us to offer a product line that we believe is uniquely both broad and deep.

Ranging from commodity cores through leading-edge functions, developers can find—and trust—many of the cores they need in the CAST product line. We provide the reliability, stability, and services of a large corporate IP provider but with agility and relationship-building more akin to a start-up.

Another competitive advantage CAST offers is less tangible but equally important. Despite being geographically widespread, we work hard to build a close, strong team, treating every employee, partner, and customer with respect and integrity. While some days are certainly challenging, the CAST team overall enjoys their work and shares in the company mission, and this attitude very positively impacts the world in which we operate.

What new features/technology are you working on?
We don’t usually launch or even promote products until they are fully verified, packaged, and ready for immediate successful use. But I can share a few things just for your readers.

Soon we’ll introduce 100G and 400G TCP/UDP/IP hardware stacks that we think will be the first to support both IPv4 and IPv6.

We’re also enhancing our offerings in the area of TSN Ethernet—in collaboration with our partner at Fraunhofer IPMS—with 10G cores that better address the needs of industrial automation, automotive, and aerospace system developers.

Single Pair Ethernet (SPE) is another hot area these days, and you’ll soon see us collaborating with other members of the SPE Alliance to showcase some exciting products.

Yet another area of focus for CAST is that of embedded processors, their peripherals, subsystems, and security. Adding more new RISC-V cores with improved security features and better performance is on our roadmap for this year, as well as a series of security function accelerators and peripherals.

How do customers normally engage with your company?
More than 50% of new sales are to repeat customers, who will just call or email us directly.

We do regular outreach at events, with editorial placements, IP portals, select advertising, and social media, but most new customers find us through our website, which we first launched in 1994.

Our commitment to saving SoC developers considerable time begins with their first contact with CAST online. We freely post a considerable amount of technical information about every core, which helps the developer determine if a particular core may be right for them, even before they contact us.

Once they do contact us, our team of experienced sales engineers and global representatives is ready to help them choose the best IP for their exact needs. Eventual system success begins with choosing the right IP cores, and we never propose a non-optimum solution just to close a sale.

Sometimes the potential customer is exploring a technical area in which they lack personal experience—automotive ASIL D certification, for example—and our team has the know-how and aptitude to help them understand and work through the best fit for their needs.

Our Better IP Experience philosophy thus extends from our first interaction with a potential customer through their successful production and shipping of products employing IP cores from CAST. We believe that this approach—as well as our product line—sets CAST apart as one of the best IP partners a SoC or FPGA developer might find.

Also Read:

The latest ideas on time-sensitive networking for aerospace


How does your EDA supplier ensure software quality?

by admin on 05-30-2024 at 10:00 am


In the fast-paced world of electronic design automation (EDA) software development, maintaining high code quality while adhering to tight deadlines is a significant challenge. Code coverage, an essential metric in software testing, measures the extent to which a software’s source code is executed in tests. High code coverage is indicative of thorough testing, suggesting a lower likelihood of undiscovered bugs. However, achieving and maintaining it can be resource-intensive and time-consuming. This is why AnaCov, our proprietary software code coverage solution, became a game-changer for Siemens EDA’s Calibre software. EDA users should have confidence in their supplier’s ability to meet very high software quality standards, so we invite you to learn more about how we do it.

AnaCov: The concept

We developed the sophisticated AnaCov tool to facilitate code coverage testing by mapping test cases to functions and source files. It utilizes coverage data obtained from GCOV, stored in Git repositories, and operates within an SQL database framework. This innovative approach allows quality assurance (QA) engineers to efficiently track and analyze code coverage over time. The primary goal is to ensure comprehensive testing while minimizing the use of time and disk space.
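To make the idea of mapping test cases to files and functions in an SQL framework concrete, here is a minimal sketch using SQLite. The schema, table names, and the test/file names are purely illustrative assumptions — AnaCov’s actual database layout is not public.

```python
import sqlite3

# Hypothetical schema: tests and source files, plus a table linking each
# test to the functions it exercised and how many lines it hit there.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_files (id INTEGER PRIMARY KEY, path TEXT UNIQUE);
CREATE TABLE test_cases  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE coverage (
    test_id INTEGER REFERENCES test_cases(id),
    file_id INTEGER REFERENCES source_files(id),
    function_name TEXT,
    lines_hit INTEGER,
    lines_total INTEGER
);
""")

# Record that one (made-up) test exercised two functions in rules.c.
conn.execute("INSERT INTO source_files (path) VALUES ('src/rules.c')")
conn.execute("INSERT INTO test_cases (name) VALUES ('t_drc_basic')")
conn.executemany(
    "INSERT INTO coverage VALUES (1, 1, ?, ?, ?)",
    [("parse_rule", 40, 50), ("apply_rule", 18, 30)],
)

# The mapping then answers: which tests cover a given source file, and
# how thoroughly?
rows = conn.execute("""
    SELECT t.name, c.function_name, 100.0 * c.lines_hit / c.lines_total
    FROM coverage c
    JOIN test_cases t ON t.id = c.test_id
    JOIN source_files f ON f.id = c.file_id
    WHERE f.path = 'src/rules.c'
""").fetchall()
```

A query like the last one is what lets a QA engineer go directly from a modified source file to the subset of tests worth re-running.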

Why a tool like AnaCov is needed for modern software testing

As software becomes increasingly complex, the need for effective testing methods has never been more critical. Traditional approaches to code coverage can often lead to significant consumption of disk space and processing time, particularly when dealing with large volumes of test cases. AnaCov addresses these challenges by providing a streamlined, efficient method for tracking and analyzing code coverage.

AnaCov’s core features

  • Resource Optimization: AnaCov is designed to manage large-scale testing efficiently, reducing the time and disk space typically required for comprehensive code coverage analysis.
  • User-Friendly Interface: With a single command line interface, AnaCov is accessible to users of varying expertise levels, from seasoned QA engineers to newcomers.
  • Advanced Code Mapping: The tool’s ability to map test cases to specific source files and functions is crucial for targeted testing, ensuring that new code additions are thoroughly vetted.
  • Versioned Coverage Tracking: AnaCov enables QA teams to track coverage over different development stages, offering insights into long-term code quality and maintenance.

The working principle of AnaCov

AnaCov operates by taking GCOV run data as input and producing detailed coverage reports. These reports highlight the percentage of code covered by tests, offering invaluable insights to developers and QA engineers. The tool’s data storage module, a central component of its functionality, comprises local storage and a database. The local storage serves as a centralized space for coverage data, enhancing accessibility and management across various projects.
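As a rough illustration of what “taking GCOV run data and producing a coverage percentage” involves, the sketch below parses the annotated text that `gcov` emits: each line starts with an execution count, where `-` marks a non-executable line and `#####` marks an executable line that was never run. This is a simplified reading of the format, not AnaCov’s actual parser.

```python
def gcov_line_coverage(gcov_text: str) -> float:
    """Percent of executable lines hit, from gcov-annotated source text."""
    hit = total = 0
    for line in gcov_text.splitlines():
        count = line.split(":", 1)[0].strip()
        if count == "-":
            continue          # non-executable: blank line, declaration, ...
        total += 1
        if count != "#####":  # "#####" = executable but never run
            hit += 1
    return 100.0 * hit / total if total else 0.0

sample = """\
        -:    1:#include <stdio.h>
        2:    2:int add(int a, int b) { return a + b; }
    #####:    3:int unused(void) { return 0; }
"""
print(gcov_line_coverage(sample))  # 2 executable lines, 1 hit -> 50.0
```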

AnaCov’s database component stores the coverage output data generated during code analysis. Leveraging database capabilities, it facilitates instant retrieval and utilization of coverage data, enabling informed decision-making and tracking of code coverage progress. The components of AnaCov are shown in Figure 1.

Fig. 1. AnaCov components

AnaCov’s unique approach to coverage analysis

The philosophy behind AnaCov focuses on optimizing resources and enhancing usability. AnaCov addresses the challenge of disk space consumption by archiving only essential files needed for generating coverage reports. Its run process analyzes the codebase and the test suite to determine the extent of coverage, feeding this data into the database and potentially committing it to a Git remote repository for collaborative analysis.

One of AnaCov’s standout features is its ability to merge multiple coverage archives. This function is particularly beneficial in large-scale projects with separate coverage reports for different components. By combining these individual archives, AnaCov offers a unified view of code coverage, helping teams understand the test coverage across the entire codebase.
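The merge step can be pictured as follows. In this sketch an “archive” is simply a dict mapping file path to per-line hit counts, and merging sums the counts, as gcov itself does when runs accumulate; AnaCov’s real archive format is not public, so treat this only as the shape of the operation.

```python
from collections import Counter

def merge_archives(*archives):
    """Combine per-component coverage archives into one unified view."""
    merged = {}
    for archive in archives:
        for path, hits in archive.items():
            # Summing hit counts per line; a line is covered overall if
            # any component's tests executed it.
            merged.setdefault(path, Counter()).update(hits)
    return merged

# Two made-up component reports sharing one source file.
frontend = {"parser.c": Counter({10: 3, 11: 0})}
backend  = {"parser.c": Counter({10: 1, 12: 5}), "emit.c": Counter({4: 2})}
combined = merge_archives(frontend, backend)
```

After the merge, line 10 of `parser.c` shows the accumulated four hits, and files seen by only one component carry over unchanged.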

Incremental coverage and historical tracking

A key feature of AnaCov is its incremental coverage capability, which allows QA engineers to measure coverage for newly added or modified code, separate from the entire codebase. This feature not only speeds up the testing process but also optimizes it by focusing only on the relevant code changes.
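The incremental idea reduces to restricting the coverage question to the lines a change actually touched (for example, the line numbers reported by `git diff`), instead of re-evaluating the whole codebase. A hedged sketch, with made-up data:

```python
def incremental_coverage(line_hits, changed_lines):
    """Coverage of only the changed executable lines in one file.

    line_hits: {lineno: hit_count} from a test run;
    changed_lines: set of line numbers modified by the new commit.
    """
    relevant = {ln: line_hits[ln] for ln in changed_lines if ln in line_hits}
    if not relevant:
        return 100.0  # the change touched no executable lines
    covered = sum(1 for count in relevant.values() if count > 0)
    return 100.0 * covered / len(relevant)

hits = {10: 4, 11: 0, 12: 7, 13: 0}         # per-line counts from one run
changed = {11, 12, 99}                      # 99: a non-executable line
print(incremental_coverage(hits, changed))  # line 12 hit, 11 missed -> 50.0
```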

Moreover, AnaCov incorporates a historical coverage feature using the Git version control system. This feature maintains a comprehensive record of code coverage status across different development milestones, facilitating a deeper understanding of the evolution of test coverage over time.
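Keying each coverage summary to the Git commit it was measured at is the essence of such historical tracking. The sketch below uses a plain dict and invented commit IDs purely for illustration; AnaCov itself persists this in its database and Git repositories.

```python
import subprocess

history = {}  # commit hash -> coverage percent (illustrative storage only)

def record_coverage(percent, commit=None):
    """Store a coverage figure against a commit (defaults to HEAD)."""
    if commit is None:
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    history[commit] = percent
    return commit

def coverage_trend(commits):
    """Coverage at each recorded milestone, in the order given."""
    return [(c, history[c]) for c in commits if c in history]

# Made-up milestones: a release tag, then after a new test suite landed.
record_coverage(78.4, commit="a1b2c3d")
record_coverage(83.1, commit="e4f5a6b")
```

Replaying `coverage_trend` over a list of milestone commits is what turns isolated coverage runs into a picture of how test coverage evolves over time.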

AnaCov’s impact on software testing

AnaCov’s introduction significantly impacts software testing, addressing critical issues faced by QA engineers and developers. Its ability to efficiently track code coverage and analyze new code additions ensures that software quality is not compromised under tight development schedules. By enabling the merging of multiple coverage reports into a single, comprehensive document, AnaCov proves to be an invaluable tool, leading to higher quality code and more efficient testing processes.

Real-world application and benefits

In practical terms, AnaCov has shown remarkable results in real-world applications. For instance, using AnaCov for the Siemens EDA Calibre product reduced the disk space for a full coverage run from 670 GB to just 20 GB. This significant reduction demonstrates AnaCov’s effectiveness in optimizing resource usage in software testing environments.

Furthermore, AnaCov’s mapping functionality is vital for efficiently conducting code coverage analysis. By establishing mappings between test cases and code components, QA engineers can easily determine which test cases cover specific source files or functions. This targeted approach saves time and resources while ensuring comprehensive code coverage.

Conclusion

The end users of complex EDA software deserve to know how their vendors ensure high-quality software. AnaCov represents a significant advancement in the field of software testing. Its innovative approach to code coverage analysis addresses the critical challenges of resource optimization, usability, and efficiency. By offering detailed insights into code coverage and enabling efficient tracking of new code additions, AnaCov plays a crucial role in improving software quality. Its integration into the software development process marks a step forward in ensuring robust, high-quality software products.

Also Read:

The secret to Calibre software quality – AnaCov, our in-house code coverage analysis tool

AnaCov – A novel method for enhancing coverage analysis

How to Find and Fix Soft Reset Metastability