
Who Are the Next Anchor Tenants at DAC? #61DAC

by Mike Gianfagna on 07-11-2024 at 10:00 am

DAC Roundup – Who Are the Next Anchor Tenants at DAC?

#61DAC is evolving. The big get bigger and ultimately focus on other venues for customer outreach and branding. This is a normal evolution in any industry. In EDA, many noticed that Cadence and Synopsys have downsized their booths at DAC. Everyone knows CDNLive and SNUG are very successful events for these companies, so the change shouldn't come as a surprise, and there may be other examples of this trend as the industry matures. The interesting question is who will form the next wave of anchor tenants at DAC. There are clearly some new entrants gaining momentum fast, and the how and why of this phenomenon is interesting. I had the opportunity to speak with an executive from one such company at DAC. The conversation was both enlightening and inspirational. Let's examine who the next anchor tenants at DAC might be.

Altair Company Profile

Before getting into the profile of Altair, an observation about the focus of #61DAC is relevant. The conference tagline is now The Chips to Systems Conference. This is not a marketing slogan; it's a statement about where the electronics industry is going. That is, chips are becoming the critical enabler for a growing class of electronic systems.

So, the question to ask as we look for the next anchor tenants at DAC is this – which companies have a broad enough footprint to enable systems with electronics? Altair is one such company and is one to watch as the next crop of DAC anchor tenants move in. You can learn about the breadth and focus of Altair on SemiWiki here. A short excerpt from the Altair website will explain a lot as well:

Changing Tomorrow, Together

When data science meets rocket science, incredible things happen. The innovation our world-changing technology enables may feel like magic to users, but it’s the time-tested result of the rigorous application of science, math, and Altair.

Our comprehensive, open-architecture simulation, artificial intelligence (AI), high-performance computing (HPC), and data analytics solutions empower organizations to build better, more efficient, more sustainable products and processes that will usher in the breakthroughs of tomorrow’s world. Welcome to the cutting edge of computational intelligence – no magic necessary.

In my opinion, this is the stuff a DAC anchor tenant should be made of.

Altair – the Backstory

Sarmad Khemmoro

I had the good fortune to spend some time with Sarmad Khemmoro at DAC. Many thanks to Dan Nenni for setting it up. Sarmad is currently the senior VP of Product & Strategy – Electronics Design & Simulation at Altair. He clearly sees the opportunity in the global electronics market in general and at DAC in particular. He has a storied career with senior technical and strategy leadership roles at companies such as Mentor, Innoveda and Viewlogic. He knows the technology behind chip design and the ecosystem that uses it.

He’s a natural fit to help Altair take a growing role in the world of electronic systems. He shared some valuable information during our meeting.

He began with a discussion of convergence. ECAD, MCAD, PLM, and many other disciplines are now coming together, through acquisition or partnership, to create the technology stack required to realize tomorrow's world-changing products. For Altair, there are three focus areas: silicon debug; 3DIC multi-physics, from chip to PCB to system; and job scheduling and license management. It's a broad footprint that is growing.

Altair has grown and will continue to grow through acquisitions, and the company seems to have cracked the code for that process. It turns out many of the CEOs of acquired companies are still with Altair, which speaks volumes about the quality of the workplace and the company's commitment to its employees. We also talked about Altair's customers; the list includes many household names. Altair is also very strong in the automotive industry, which will be a strategic advantage as that market continues to consume more semiconductors. Sarmad is located in Detroit, so he's up close and personal on this front.

We also discussed AI and digital twins. Altair has capabilities in both areas and Sarmad is quite familiar with the company’s strategy. The list of industries supported by Altair is quite extensive as shown in the figure below. This is a company with substantial reach.

Primary Industries Supported

Sarmad also discussed Altair’s unique and patented licensing model. The company basically re-wrote the rule book regarding tool licensing. The process is driven by something called Altair Units. Purchasing units gives users full access to all Altair software tools whenever they need them, and they can determine when, where, and how they want to use different tools without needing to worry if they’re eligible for access.

This approach removes a lot of the uncertainty and overhead associated with specific tool licensing. Altair has a long list of partners and through the Altair One™ Marketplace partner software can also be accessed with Altair Units, simplifying even more of the process.

Post-DAC Update – the Momentum Continues

Altair is clearly on the move. Its acquisition machine is in high gear with a recent announcement signaling its intention to acquire Metrics Design Automation, further expanding the company’s footprint in EDA. Metrics is a Canadian company that has developed a game-changing simulation as a service (SaaS) business model for semiconductor electronic functional simulation and design verification.

Combining the Metrics simulator with Altair’s silicon debug tools will result in a world-class, advanced simulation environment with superior simulation and debug capabilities. Note that Metrics is led by Joe Costello, who is something of a folk hero in EDA.

Tight relationships in the semiconductor ecosystem are a key attribute of any DAC anchor tenant. There was also a recent announcement that Altair has joined the Samsung Advanced Foundry Ecosystem, known as SAFE™. Altair and Samsung Electronics will combine Altair’s comprehensive EDA technology with Samsung Foundry’s manufacturing capabilities to establish a more innovative, more efficient semiconductor design and production process.

To Learn More

My conversation with Sarmad left an impact. The breadth of Altair’s tools is substantial, and the company has a vision to grow in key markets to further dominate the landscape. You can explore Altair’s capabilities for semiconductor design and EDA here. If you want to take the grand tour of all industries supported, you can do that here.

You can also see the full announcement about the Metrics acquisition here and the full Samsung SAFE announcement here.

So, the next time you wonder who are the next anchor tenants at DAC, think Altair.


AI Booming is Fueling Interface IP 17% YoY Growth

by Eric Esteve on 07-11-2024 at 6:00 am


The AI explosion has clearly been driving the semiconductor industry since 2020. AI processing, based on GPUs, needs to be as powerful as possible, but a system reaches its optimum only if it can rely on top-quality interconnects. The various subsystem parts (memory, processor, co-processor, network) need to be connected by interface links with ever more bandwidth and lower latency: DDR5 or HBM memory controllers, PCIe and CXL, 224G SerDes, and so on. When you design a supercomputer, raw processing power is important, but optimizing memory access, latency, and network speed is what allows you to succeed. It's the same with AI, and that's why interconnect protocols are becoming key.

In 2023, the semiconductor market declined, but the interface IP segment grew by 17%. Our forecast shows even stronger growth for 2024 to 2028, comparable to the 20% growth seen in the early 2020s. AI is driving the semiconductor industry, and interconnect protocol efficiency is fueling AI performance. A virtuous cycle!

The interface IP category has moved from an 18% share of all IP categories in 2017 to 28% in 2023. As of 2024, we think this trend will amplify during the decade, with interface IP growing to 38% of the total, at the expense of processor IP, which will fall from 47% in 2023 to 41% in 2028.

As usual, IPnest has made a five-year forecast (2024-2028) by protocol and computed the CAGR for each (picture below). As the picture shows, most of the growth is expected to come from three categories, PCIe, memory controller (DDR), and Ethernet & D2D, with five-year CAGRs of 19%, 23%, and 22%, respectively.

This should not be surprising, as all these protocols are linked to data-centric applications! The Top 5 protocols together weighed $1820 million in 2023; the value forecast for 2028 is $4390 million, a CAGR of 19%.
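The CAGR arithmetic can be checked with a short sketch. The formula is the standard compound-growth relation, and the dollar figures are those quoted above:

```python
# Five-year CAGR implied by the Top 5 interface-IP protocol forecast:
# $1,820M in 2023 growing to a forecast $4,390M in 2028.
def cagr(start, end, years):
    """Compound annual growth rate, as a fraction."""
    return (end / start) ** (1 / years) - 1

rate = cagr(1820, 4390, 5)
print(f"Implied CAGR: {rate:.1%}")  # rounds to the ~19% quoted above
```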

This forecast is based on the amazing growth of data-centric applications, AI in short. Looking at TSMC's revenue split by platform in 2023, HPC is clearly the driver. This started in 2020, and we expect the trend to continue through at least 2028.

Conclusion

Synopsys has built a strong position on every protocol and every application, enjoying more than 55% market share, through strategic acquisitions made since the early 2000s and by offering integrated solutions (PHY and controller). We still don't see any competitor in a position to challenge the leader. The next two are Cadence and Alphawave, each with market share around 12%, far behind the leader.

We also think a major strategy change will unfold during the decade. IP vendors focused on high-end IP architecture will try to develop a multi-product strategy and market ASICs, ASSPs, and chiplets derived from their leading IP (PCIe, CXL, memory controller, SerDes…). Some, like Credo, Rambus, and Alphawave, have already started. Credo and Rambus already see significant ASSP revenues, but we will have to wait until 2025, at best, to see measurable results on chiplets.

This is the 16th edition of the survey, started in 2009 when the interface IP market was $250 million ($1980 million in 2023), and we can affirm that the five-year forecasts have stayed within a +/- 5% error margin!

IPnest predicts that the interface IP category will be in the $4750 million range (+/- $250 million) in 2028, and we believe this forecast is realistic.

If you're interested in this "Interface IP Survey", released in July 2024, just contact me:

eric.esteve@ip-nest.com .

Eric Esteve from IPnest

Also Read:

Semi Market Decreased by 8% in 2023… When Design IP Sales Grew by 6%!

Interface IP in 2022: 22% YoY growth still data-centric driven

Design IP Sales Grew 20.2% in 2022 after 19.4% in 2021 and 16.7% in 2020!


Will Semiconductor earnings live up to the Investor hype?

by Claus Aasholm on 07-10-2024 at 10:00 am


This post gives the status of the semiconductor industry before the earnings season, sharing the information available before the results are revealed.

The first Q2 swallows
A few companies with quarters not aligned to calendar quarters have reported. Nvidia was slightly ahead of expectations, and the stock price briefly made the company the world's most valuable during June. All of this is driven by data centre and H100 AI sales.

Broadcom reported disappointing semiconductor revenue, only saved by AI Network and Accelerator sales to Meta and Google. Marvell painted a similar picture with everything down except the data centre business. This is not a good sign for the broader earnings season coming up. (Broadcom result)

Lastly, Micron showed 17% growth, mainly due to memory price increases, and only the storage business was growing in bits sold. Even the compute business was flat in bits sold, indicating that Micron is not getting much action from Nvidia. (Micron result)

The closure of Q1
The total revenue of Semiconductor companies was flat in Q1-24 compared to the prior quarter, but the overall growth compared to Q1-23 was quite strong. 29% growth signals the industry is well into the cyclical recovery period (Long-term growth is currently at 8%).

If Nvidia’s strong growth is excluded, the growth falls to under 10% or close to the long-term level.
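The effect of stripping out a single outlier can be sketched as follows. The industry totals below are hypothetical placeholders chosen only to mirror the shape of the argument (roughly 29% headline growth falling to single digits without Nvidia); they are not the article's underlying data, though the Nvidia quarters are the company's approximate reported revenue:

```python
# Illustrative only: excluding one outlier from a year-over-year growth figure.
# Industry totals are hypothetical placeholders, not reported data.
def yoy_growth(current, prior):
    return current / prior - 1

industry_q1_24, industry_q1_23 = 116.0, 90.0  # hypothetical industry totals, $B
nvidia_q1_24, nvidia_q1_23 = 26.0, 7.2        # Nvidia's approx. reported quarters, $B

headline = yoy_growth(industry_q1_24, industry_q1_23)
ex_nvidia = yoy_growth(industry_q1_24 - nvidia_q1_24,
                       industry_q1_23 - nvidia_q1_23)
print(f"Headline growth:  {headline:.0%}")   # ~29%
print(f"Ex-Nvidia growth: {ex_nvidia:.0%}")  # falls to single digits
```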

The exclusion of Nvidia revenue makes Foundry revenue growth very similar to the growth of Semiconductor companies, highlighting that Nvidia revenue is mostly profit, and only 15% makes a mark on Foundry revenue.

The four growth curves represent the semiconductor time machine; while imperfect, they allow a peek into the future of the Semiconductor Companies.

With zero inventory movements, the time machine works like this:

The revenue of tools, materials, and foundries is a chain of events predicting the revenue of semiconductor companies. While it can be used to predict the individual results of some of the largest semiconductor companies, it works better as an overall indicator of the industry.

The negative growth of materials and the drop in foundry revenue do not suggest a strong recovery in the Q2 results, and the tools revenue is not a solid longer-term indication of revenue expansion.

Mean revenue results
With Nvidia’s strong performance clouding general industry insights, it is worth looking at a Box and Whiskers plot based on mean values.

This is a way of investigating industry growth without the outsized impact of the outliers. Here, it becomes obvious that not only is Nvidia driving the overall growth but also the Korean memory companies led by SK Hynix, which are currently winning HBM business at Nvidia.

The mean growth for semiconductor companies compared to Q1-23 is 0.2%, indicating that the AI pocket of growth is the only action in the Semiconductor Industry in Q1.

Median growth for tool companies is positively impacted by the good performance of Chinese tool companies.

Revenue growth by Manufacturing Model
We divide Semiconductor companies into three different categories:

1) Integrated Device Manufacturers: Traditional model with fabs.

2) Fabless Semiconductor Companies: Companies exclusively using foundries

3) Mixed Manufacturing Model: Analog and power fabs with high-end digital outsourced to foundries.

The relative growth for Fabless is strong, but the impact of Nvidia accounts for most of the development. Without Nvidia, the result is 4%. The IDMs are lifted by the increase in memory pricing rather than bit growth. The mixed model companies have seen significant declines over the last two quarters.

The Inventory Situation
The inventory position for different areas of the supply chain can reveal how much of a surprise the current revenue level represents. If revenue is unfolding in line with the quarterly manufacturing plan, you would expect to see a decrease in inventory as companies try to optimise their inventory. The exception is if companies are running on low inventory, which is not the case in the current market environment (with notable exceptions for Nvidia and the company's supply chain).

The chart shows the inventory days according to the supply chain position. While foundries and semiconductor companies have been depleting inventory compared to Q1-23, the materials companies are still struggling with the last of the pile-up.

The Q1-24 increase in inventory is driven by lower-than-expected demand from the end markets, which slams through the supply chain. This will likely continue into Q2-24 as neither foundries nor semiconductor companies invest in materials to support a potential Q2-24 revenue increase.
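For readers unfamiliar with the metric behind such charts: "inventory days" normalises an absolute inventory position by the cost-of-goods-sold run rate. A minimal sketch with placeholder numbers (the formula is standard; the figures are not from the article):

```python
# Days of inventory: how long current inventory would last at the current
# cost-of-goods-sold run rate. Input figures below are hypothetical.
def inventory_days(inventory, quarterly_cogs, days_in_quarter=91):
    return inventory / quarterly_cogs * days_in_quarter

# Hypothetical company: $4.0B of inventory against $3.0B quarterly COGS.
print(f"{inventory_days(4.0, 3.0):.0f} days of inventory")  # about 121 days
```

Rising inventory days with flat revenue is the "surprise" signal the text describes: product was built to a plan the end market did not absorb.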

World Semiconductor Trade Statistics (WSTS)
WSTS just released their Semiconductor trade statistics for May, which showed another monthly increase. While this should be a good signal, there are issues with how WSTS accounts for semiconductor revenue.

WSTS only gets monthly reports from its members. The reporting is screened by a third-party accountant who shields the identity of the reporting company, so WSTS does not know who reports what, only what products were sold. As many important companies are not members, WSTS has to guess their revenue numbers by month. This problem is growing with the revenue of Nvidia, which is not a member of WSTS and now accounts for more than $8.6B/month, or more than 17% of total WSTS revenue. A year ago, it was $2.4B/month. In addition, neither Intel, AMD, nor Broadcom are members of WSTS.

This makes WSTS numbers very unpredictable and not very useful for making predictions anymore.

TSMC update
As TSMC reports monthly revenue, it is possible to see Q2 revenue already. While it is a TSMC record, the quarter is only slightly above Q4-22 and Q4-23.

TSMC’s strong quarter suggests an uptick in market activity. It is hard to judge if this is broad-based or still AI-centric. Apart from Nvidia, TSMC will manufacture AI GPUs for Intel and AMD this quarter. This could signal that AMD and Intel are expecting meaningful AI orders. Whether this materialises is another matter entirely. Also, TSMC is winning orders from Samsung’s foundry business, which is struggling to get good yields on leading-edge nodes.

Semiconductor Operating Profits
The operating profits of semiconductor companies compared to Q1-23 look incredibly good, with over 300% growth, while the rest of the supply chain has meager results.

As the Q1-23 baseline is taken from the semiconductor cycle minimum, it captures memory companies starting negative and ending positive, which does not tell the full story. Turning the dial back to Q1-22 gives a significantly different view, in which all of the supply chain's operating profit growth is under water.

It is also worth noting that Nvidia is now dominating the total operation profit of the Semiconductor companies, skewing the graphs dramatically.

While we still wait for most Semiconductor companies to publish the Q2-24 result, the division between Nvidia and the rest of the industry is clear. In Q1, Nvidia accounted for more than half the semiconductor operating profit. This is likely to be the case in Q2 also.

The Stock market perspective
While we do not try to predict share prices, we do not mind comparing business development with increases in share prices.

We understand that revenue is not the only important element in a company's valuation, but revenue growth is incredibly important for semiconductor companies. Without revenue growth, it is difficult to make meaningful gains in free cash flow, which matters even more in valuations.

We use the Philadelphia Semiconductor Index (SOX) as a good proxy for the collective share price of semiconductors. As can be seen in the graph below, the current share gains are not justified by a similar gain in revenue growth.

From an operating profit perspective, the increase in share price looks more justified, although it should be noted that Nvidia is driving both.

Adding a comparison from Q1-22 gives a different view, where none of the supply chain sectors have returned to an operating profit at the level of Q1-22.

Conclusion
While there is a lot of semiconductor optimism before the current earnings season, there is not a lot of evidence of significant revenue growth or inventory depletion that would indicate a general upturn. The optimism surrounding the WSTS numbers does not point to a general upturn either, as they are dominated by Nvidia's hypergrowth and the memory companies' revenue increases driven by higher prices. Memory volume is not increasing.

TSMC will be reporting healthy numbers, but not anything that goes through the roof. The good result will be dominated by supplies of AI products for Nvidia, Intel, AMD, and Broadcom. It will be interesting to see if the semiconductor companies can turn these products into revenue. We will have a special focus on Intel, as the company will need to show results soon.

If you are an investor or another stakeholder in the semiconductor industry, you can gain insights from our updates as the semiconductor companies report Q2 results.

Also Read:

Automotive Semiconductor Market Slowing

2024 Starts Slow, But Primed for Growth

Electronics Turns Positive


Production AI is Taking Off But Not Where You Think

by Bernard Murphy on 07-10-2024 at 6:00 am


AI for revolutionary business applications grabs all the headlines, but real near-term growth is already happening in consumer devices and in IoT. For good reason. These applications may be less eye-catching but are eminently practical: background noise cancellation in earbuds and hearing aids, keyword and command recognition in voice control, face-ID in vision, predictive maintenance, and health and fitness sensing. None of these require superhuman intelligence or revolutions in the way we work and live, yet they deliver meaningful productivity and ease-of-use improvements. At the same time, they must be designed to milliwatt-level power budgets and must be attractive to budget-conscious consumers and enterprises aiming to scale. Product makers in this space are already actively building and selling products for a wide range of applications and now have a common interest group (not yet a standards body) in the tinyML Foundation.

Requirements and Opportunity

Activity around tinyML is clear, but it's worth stressing that the tinyML group isn't (yet) setting hard boundaries on how a product qualifies to be in the group. However, per Elia Shenberger (Sr. Director Biz Dev, Sensors and Audio at Ceva), one common factor is power: less than a watt for the complete device, and milliwatts for the ML function. Another common factor is ML performance, up to hundreds of gigaops per second.

These guidelines constrain networks to be small ML models running on battery-powered devices. Transformers/GenAI are not in scope (though see the end of this blog). Common uses will be sensor data analytics for remote deployments with infrequent maintenance, and always-on functions such as voice and anomalous sound detection or visual wake triggers. As examples of active growth, Raspberry Pi (with AI/ML) is already proving very popular in industrial applications, and ST sees tinyML as the biggest driver of the MCU market within the next 10 years.

According to ABI Research, 4 billion inference chips for tinyML devices are expected to ship annually by 2028, with a CAGR of 32%. ABI also anticipates that by 2030, 75% of inference-based shipments will run on dedicated tinyML hardware rather than general-purpose MCUs.

A major factor in making this happen will almost certainly be cost, both hardware and software. Today a common implementation depends on an MCU for control and feature extraction (signal processing), followed by an NPU or accelerator to run the ML model. This approach incurs a double royalty overhead and will certainly result in a larger chip area/cost. It will also promote greater complexity in managing software, AI models, and data traffic between these cores. In contrast, single-core solutions with out-of-the-box APIs, libraries, and ported models based on open model zoos are going to look increasingly appealing.

Ceva-NeuPro-Nano

Ceva is already established in the embedded inference space with their NeuPro-M family of products. Recently they extended this family by adding NeuPro-Nano to address tinyML profiles. They claim some impressive stats versus alternative solutions: 10X higher performance, 45% die area, 80% lower on-chip memory demand and 3X lower energy consumption.

The architecture allows them to run control code, feature extraction and the AI model all within the same core. That reduces the burden on the MCU, allowing a builder to go with a smaller MCU or even dispense with that core altogether (depending on application). To understand why, consider two common tinyML applications: wake-word/command extraction from voice, and environmental noise cancellation. In the first, feature extraction consumes 36% of processing time, with the balance in the AI model. In the second, feature extraction consumes 68% of processing time versus the AI model. Clearly moving these into a common core with dedicated signal processing plus an ML engine is going to outperform a platform splitting feature extraction and AI model between 2 cores.
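A back-of-envelope model illustrates the point. The parameters below (a 2x slowdown for running feature extraction on a general-purpose MCU instead of dedicated signal-processing hardware, plus a fixed inter-core handoff cost) are my own illustrative assumptions, not Ceva's measurements:

```python
# Toy model: relative execution time of a two-core split (MCU feature
# extraction + NPU inference) vs. a fused core, normalised so the fused
# core takes 1.0. Slowdown and handoff values are illustrative assumptions.
def two_core_time(feature_frac, mcu_slowdown=2.0, handoff=0.15):
    return feature_frac * mcu_slowdown + (1 - feature_frac) + handoff

for name, frac in [("wake-word (36% feature extraction)", 0.36),
                   ("noise cancellation (68% feature extraction)", 0.68)]:
    print(f"{name}: {two_core_time(frac):.2f}x the fused-core time")
```

With these assumptions, the two-core split costs roughly 1.5x and 1.8x the fused-core time, and the penalty grows with the feature-extraction share, which is why workloads like noise cancellation stand to benefit most.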

The NeuPro-Nano neural engine that runs the AI model is scalable, supporting multiple MAC configurations, and ML performance is further boosted through sparsity acceleration and activation acceleration for non-linear types such as sigmoid.

Proprietary weight compression technology dispenses with the need for intermediate decompression storage, handling decompression on the fly as needed. This significantly reduces the need for on-chip SRAM, bringing more cost reduction.

Power management is a key component in meeting tinyML objectives. Clever sparsity management minimizes calculations with zero weights, dynamic voltage and frequency scaling (tunable per application) can significantly reduce net power, and weight sparsity acceleration also reduces energy/bandwidth communication overhead.
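The sparsity idea can be illustrated with a small sketch. This is only a software model of the concept; in NeuPro-Nano it happens in hardware with compressed weight formats, and the function below is my own illustration, not Ceva's implementation:

```python
# Weight-sparsity acceleration, modelled in software: multiply-accumulates
# are simply skipped wherever the weight is zero, saving cycles and energy.
def sparse_dot(weights, activations):
    acc, macs_issued = 0, 0
    for w, x in zip(weights, activations):
        if w == 0:
            continue  # zero weight: no MAC issued
        acc += w * x
        macs_issued += 1
    return acc, macs_issued

w = [0, 3, 0, 0, -1, 2, 0, 0]   # 5 of 8 weights are zero
x = [1, 2, 3, 4, 5, 6, 7, 8]
acc, macs = sparse_dot(w, x)
print(acc, macs)  # result 13, computed with only 3 of 8 possible MACs
```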

Finally, the core is designed to work directly with standard inference frameworks (TensorFlow Lite for Microcontrollers and μTVM) and offers a tinyML Model Zoo covering voice, vision, and sensing use cases, based on open libraries and pre-trained and optimized for NeuPro-Nano.

Future proofing

Remember that point about tinyML being a collaboration rather than a standards committee? The initial aims are quite clear; however, these continue to evolve, at least in discussion, as applications continue to evolve. Maybe the ceiling for power will be pushed up, maybe bit-widths should cover a wider range to support on-device training, maybe some level of GenAI should be supported.

Ceva is ready for that. NeuPro-Nano already supports 4-bit to 32-bit accuracies as well as native transformer computation. As the tinyML goalposts move, NeuPro-Nano can move with them.

Ceva-NeuPro-Nano is already available. You can learn more HERE.



Facing challenges of implementing Post-Quantum Cryptography

by Don Dingee on 07-09-2024 at 10:00 am


While researchers continue a march for more powerful quantum computers, cybersecurity measures are already progressing on an aggressive timeline to avoid potential threats. The urgency is partly in anticipation of a “store-now-decrypt-later” attack where compromised data, seemingly safe under earlier generations of encryption technology, is gathered and kept until quantum computers grow powerful enough to enable future decryption. Hardware lifecycles are also on the minds of many, where chips developed using classical pre-quantum algorithms will abruptly become obsolete. Secure-IC outlines the approach needed to confront the industrial challenges of implementing Post-Quantum Cryptography (PQC) in its new white paper.

Revisiting the algorithms and planning a transition

RSA became the de facto standard in encryption technology in the late 1970s. It combines short decryption times with unreasonably long crack times thanks to long key lengths. Crack time estimates in hundreds of years were the best guess based on the computing power of the day – mainframes and mini-computers. For every measure, there is a countermeasure, and it only took two decades for Shor’s algorithm to emerge, theoretically rendering both RSA and elliptic curve cryptography vulnerable. In practice, Shor’s algorithm would need to run on a much more powerful computer to crack encryption in a reasonable time. Despite processing power advances along Moore’s law, RSA cryptography has remained safely beyond cracking.
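A toy example makes the asymmetry concrete. These are the classic textbook-scale primes p=61, q=53, purely for illustration; real RSA uses keys of 2048 bits or more:

```python
# Toy RSA: encryption and decryption are cheap modular exponentiations,
# while an attacker must factor n. Tiny illustrative primes only.
p, q = 61, 53
n = p * q                # public modulus (3233): trivial to factor at this size
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)          # encrypt: m^e mod n
assert pow(ciphertext, d, n) == message  # decrypt: c^d mod n recovers m
print(f"n={n}, e={e}, d={d}, c={ciphertext}")
```

Shor's algorithm threatens exactly the factoring step: on a large enough quantum computer, recovering p and q from n (and hence d) becomes tractable, which is why the key-length arms race no longer suffices.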

Quantum computing changes the curve with an exponential increase in computational power as the number of qubits scales. Soon, quantum computers could offer enough operations per second to cut crack times dramatically for classical encryption methods. That should not be a surprise – classical encryption algorithms remain fixed while computing power grows yearly, which means new algorithms will be needed if encryption is to stay safe.

NIST has pursued PQC algorithms since 2016, announcing its first round of selections in July 2022. From those selections, the NSA issued its PQC recommendations in the Commercial National Security Algorithm Suite 2.0 (CNSA Suite 2.0) with timelines for modernizing six classes of systems and a target of having all systems PQC-enabled by 2033.

With the NSA’s initial software/firmware signing and cloud services goals looming in 2025, developers need to get moving with PQC technology and IP, forcing the discussion from theory to practice. Agencies in Europe – including France’s National Cybersecurity Agency (ANSSI) and Germany’s Federal Office for Information Security (BSI) – and Asia have issued similar timelines for approaching the PQC transition.

Projecting PQC theory into practical implementations

Secure-IC devotes the balance of its white paper to practical implementation challenges. High on the list is performance, particularly embedded device performance, as many more devices connect to the internet and must encrypt and decrypt traffic for security. Also on the list is hybridization, where classical and PQC algorithms exist in systems simultaneously. Another point is the existence of new cryptographic primitives in PQC and the associated concerns with design, integration, licensing, and interoperability. Their last point is certifications, where industry and regional differences complicate the landscape and usually mean addressing multiple certification efforts to field a product in various applications and markets.

In developing its PQC-ready technologies, Secure-IC created a hardware accelerator and software library that delivers a complete solution to address these challenges. Their hardware architecture manages impacts on power, performance, and area (PPA) for enabling embedded devices with PQC. Their software provides configurable modules for both classical and post-quantum algorithms. Secure-IC’s solutions have achieved several certifications, including those for the automotive industry.

To download a copy of the white paper and see how Secure-IC solutions face the challenges and help developers safeguard digital assets, please visit the Secure-IC website:

Redefining Security – Confronting the Industrial Challenges of Implementing Post-Quantum Cryptography (PQC)


Breker Brings RISC-V Verification to the Next Level #61DAC

by Mike Gianfagna on 07-09-2024 at 6:00 am

DAC Roundup – Breker Brings RISC V Verification to the Next Level

RISC-V is clearly gaining momentum across many applications. That was quite clear at #61DAC as well. Breker Verification Systems solves challenges across the functional verification process for large, complex semiconductors. Its Trek family of products is production-proven at many leading semiconductor companies worldwide. So, it seems logical that Breker brings RISC-V verification to the next level and that’s exactly what the company did at #61DAC.

The highlights of Breker's presence at the show include:

  • A complete range of tests for the entire RISC-V core verification stack, from ISA to system-level interaction and performance
  • Test Suite Synthesis AI technology to track complex, unpredictable bugs and accelerate coverage of complex, super-scalar, out-of-order microarchitecture pipeline implementations
  • Self-checking content that is portable across simulation, emulation, and post-silicon, with debug and coverage analysis

Let’s look at how Breker brings RISC-V verification to the next level.

RISC-V Automated Core Verification with Synthesis Amplification

Common RISC V Verification Stack

The verification of a RISC-V processor core should include a "stack" of scenarios, as shown in the figure. Breker's RISC-V CoreAssurance SystemVIP uniquely provides this complete scenario range. Starting with randomized instruction generation and microarchitectural scenarios, unique tests check all integrity levels, ensuring the smooth integration of the core into an SoC.

This can also be extended to allow custom RISC-V instructions to be fully incorporated into the complete test suite. The capability may be ported across simulation, emulation, prototyping, post-silicon, and virtual platform environments to complete the picture.

A capability called test suite synthesis verification amplification is also included. Most test suites are templated in nature, allowing individual tests to be configured for various design situations. Breker’s SystemVIP is based on synthesis technology that uses planning algorithms, an AI technique, to amplify the scenario models, significantly improving coverage and bug hunting.
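To make the planning idea concrete, here is a toy sketch of that style of test synthesis. It is illustrative only; the action and predicate names are invented for this example and do not come from Breker’s products. A planner treats verification intents as actions with preconditions and effects, then enumerates every legal action sequence that reaches a goal, so a small scenario model fans out into many concrete tests.

```python
from collections import deque

# Hypothetical scenario model: action -> (preconditions, effects).
# Real SystemVIP scenario models are far richer; this is a toy.
ACTIONS = {
    "fetch":     (set(),              {"instr_fetched"}),
    "decode":    ({"instr_fetched"},  {"instr_decoded"}),
    "execute":   ({"instr_decoded"},  {"result_ready"}),
    "forward":   ({"result_ready"},   {"operand_bypassed"}),
    "writeback": ({"result_ready"},   {"reg_committed"}),
}

def plan(goal, max_depth=5):
    """Breadth-first planner: return every action sequence (up to
    max_depth) whose accumulated effects satisfy the goal predicates."""
    plans = []
    queue = deque([(frozenset(), [])])  # (achieved predicates, actions so far)
    while queue:
        state, seq = queue.popleft()
        if goal <= state:
            plans.append(seq)
            continue
        if len(seq) >= max_depth:
            continue
        for name, (pre, eff) in ACTIONS.items():
            if pre <= state and name not in seq:
                queue.append((state | eff, seq + [name]))
    return plans

# One goal predicate amplifies into multiple distinct test sequences.
print(plan(goal={"reg_committed"}))
# → [['fetch', 'decode', 'execute', 'writeback'],
#    ['fetch', 'decode', 'execute', 'forward', 'writeback']]
```

The amplification effect is visible even at this scale: a five-action model and one goal already yield two distinct legal sequences, and adding actions grows the test set combinatorially rather than one template at a time.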

Comprehensive System Coherency Verification

Breker’s popular Cache Coherency SystemVIP has been used by most of the leading semiconductor companies worldwide to find hundreds of bugs across many complex SoCs. As the complexity of SoCs increases, so does the requirement for system-level coherency that includes fabric and I/O, as well as advanced memory architectures.

Breker addresses these challenges with its next-generation System Coherency SystemVIP, leveraging Test Suite Synthesis to generate a broad range of coherency tests. These tests are based on multiple verification algorithms and may be easily configured to operate on all memory and fabric architectures across multicore platforms. The synthesis platform includes AI planning algorithms, cross-combination, and concurrent scheduling for high coverage and complex corner-case evaluation.

As more complex RISC-V multi-cores and systems are produced, coherency for these designs is increasing in importance. Breker’s coherency SystemVIP works hand-in-hand with its other RISC-V SystemVIPs to enable a complete solution for the most advanced designs.

The SystemVIP can generate both C code and transactions for SoC testbenches, or UVM sequences for cache unit and sub-system simulation. It can operate on a virtual prototype, simulation, emulation, FPGA prototype and even actual silicon platforms, and includes full debug and profiling of the device under test on those platforms.

Breker’s Test Suite Synthesis has been shown to produce dramatic improvements in test composition time and coverage over and above basic test generators, including typical templating test schemes. The figure below provides an overview of the platform.

Platform Overview

The CEO Perspective

Dave Kelf

I had the opportunity to catch up with my good friend Dave Kelf, CEO of Breker. I wanted to get his perspective on the RISC-V market and the impact these new innovations from Breker are having. Here’s what Dave had to say:

While RISC-V represents a huge discontinuity across the electronic industry, there is a quality expectation that has been set by companies such as Arm that RISC-V cores must meet to be successful. This requires in-depth, comprehensive verification, and the best way to meet at least part of this need is to reuse test suites that are already proven.

RISC-V verification has its unique challenges, and these are compounding as the cores get more advanced. Existing, templated tests are fine for basic embedded cores, but run out of steam for the types of devices that are now emerging. We need to apply synthesis techniques to tease out deep sequential, unpredictable bugs, implement performance-based testing and enable system-level integration verification, and this accounts for the demand explosion we have seen at Breker.

To Learn More

You can learn more about RISC-V automated core verification with synthesis amplification here and you can learn more about comprehensive system coherency verification here. And that’s how Breker brings RISC-V verification to the next level at #61DAC.


Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC

by Mike Gianfagna on 07-08-2024 at 10:00 am

DAC Roundup – Intel’s Gary Patton Shows the Way to a Systems Foundry

#61DAC was buzzing this year with talk of AI and multi-die, heterogeneous design. The promise of making 2.5/3D design and a chiplet ecosystem mainstream reality was the focus of a lot of the panels and presentations at the conference. AI is certainly a driver for this new design style, but the conversation was broader than just AI, as you will see. This new design style will require effort from every part of the semiconductor ecosystem, and this focus was on display during DAC. There is a focal point where all this work needs to come together to make it commercially available. That focal point is the foundry, and there was a keynote address on Tuesday morning at DAC that did a great job explaining how to open the door to the future. Let’s explore how Intel’s Gary Patton shows the way to a systems foundry.

What a Systems Foundry Is and Why It Matters

Before I get into Gary’s keynote, I’d like to address the elephant in the room. I’ve been in the semiconductor business for a very long time. Over the years, I’ve known Intel as a technology powerhouse that dominates markets, crushes the competition and does things the Intel Way.

Open, collaborative, ecosystem-focused, and service-oriented weren’t necessarily the first things I would think of when I heard “Intel”. But that’s exactly the presentation delivered by Dr. Gary Patton during his keynote address. Intel is clearly changing, and in a big way. With its systems foundry initiative, Intel is taking a leadership role in defining the future of semiconductor design and manufacturing. This role requires a new type of culture, and Gary is one of the Intel executives leading the way. I had a chance to speak 1:1 with Gary at DAC, and I’ll share some of his personal insights in a moment. But first, let’s look at some of the messages from his keynote.

Gary began with some eye-opening statistics. According to IDC, the world creates nearly 270,000 petabytes of data every day. That’s 270,000,000,000 gigabytes. Intel estimates that by 2030, 1 petaflop of compute and 1 petabyte of data will be less than 1 millisecond away from the average user. Enabling these achievements will require disruptive innovation – innovation that clearly goes beyond the Moore’s Law scaling we’ve come to rely upon for so long.
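The unit conversion in that statistic is easy to sanity-check (1 petabyte = 10^6 gigabytes):

```python
petabytes_per_day = 270_000                        # IDC figure cited in the keynote
gigabytes_per_day = petabytes_per_day * 1_000_000  # 1 PB = 1,000,000 GB
print(f"{gigabytes_per_day:,} GB per day")         # → 270,000,000,000 GB per day
```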

He also mentioned that while AI is contributing to this huge growth in data volume and data processing requirements, it also presents significant energy efficiency challenges. According to the NY Times and Google, AI could soon need as much electricity as an entire country (~100 terawatt-hours/year).

Gary pointed out that disruptive innovation is nothing new to our industry. Over the years, we’ve conquered the bipolar power limit, gate oxide limit, and now the planar device limit. Conquering this last one will require a combination of chip and chiplet implementation as well as package interconnect density and energy efficiency. Intel aims to be at the epicenter of all these innovations and that’s what its Systems Foundry initiative is all about.

Thanks to its advanced packaging work, Intel is on track to deliver a 50X improvement in energy efficiency and a 10,000X improvement in interconnect density, as shown in the figure below.

Intel Packaging Innovation

Gary looked beyond Intel’s innovations for the complete picture. He discussed the work of UCIe, a consortium of 135 companies. The stated goal of this effort is to develop an open specification that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level. Gary explained that the work of UCIe is delivering two orders of magnitude improvement in energy efficiency and three to four orders of magnitude improvement in bandwidth when compared with the standard package in the lower left of the figure above. These packaging improvements also deliver at least one order of magnitude lower latency than external interconnects like PCIe, Ethernet, etc. This is important work that Intel Foundry is clearly supporting.

Gary then discussed the importance of system technology co-optimization, a much broader and more ambitious version of design technology co-optimization. He explained that software & architecture, packaging, and silicon are all part of this effort, which must be holistic. He stated that “progress at individual layers in the stack is necessary but not sufficient. The entire system must be co-optimized.”

While much of the advanced process and packaging work at Intel is fueling this effort, close collaboration with the entire IP, EDA, design services, and advanced system assembly and test ecosystem is also critical for success. He described in detail the many programs that Intel Foundry has underway with its ecosystem partners to build and certify next-generation design and manufacturing capabilities, including regular meetings with all key EDA suppliers, and showed very detailed scorecards of EDA certifications across all key Intel technologies. The breadth of this effort is truly impressive. More proof of Intel’s commitment to an open design flow comes a bit later in this post.

Gary described the five-year investment Intel has made to deliver a systems foundry capability. He reported that today the company has over 100 2.5D designs in manufacturing. Design enablement, an open and collaborative attitude, a quality-first culture, strong customer support, and certified methodologies are all part of this investment, as shown in the figure below.

Intel Investment

The chart above really drove it home for me. This is very much a new and improved version of Intel. One that maintains its technology strengths but adds all the elements of a leading, world-class foundry to create a systems foundry. Next, let’s get to know the presenter of the keynote.

Leading Change – Gary Patton’s Perspective

Gary Patton

I was fortunate to have some private time with Gary after his keynote at DAC. Gary is one of the many “outsiders” that Intel has hired over the past few years – that five-year investment that is summarized above. I believe Gary’s entire career prepared him for his current work at Intel. After receiving his Ph.D. in EE from Stanford, he spent over 25 years in various leadership roles at IBM, in research, microelectronics and various corporate initiatives and product lines. Throughout this time, he honed his skills in product/technology development as well as ecosystem collaboration.

He then spent 4.5 years at GlobalFoundries as chief technology officer and senior vice president of Worldwide R&D and Design Enablement. He has now been at Intel for 4.5 years as corporate vice president and general manager, Foundry Design Enablement. He is one of the many recent hires at Intel who bring broad industry experience to the company.

Gary explained that he has always had a great respect for the accomplishments of Intel. He came to the company not to “fix” anything, but rather to take a great company to the next level. It seems to have worked out well. He credits the past 4.5 years as the best time in his career. When you consider all the things he’s accomplished, that’s saying a lot.

Gary talked about a corporate-wide shift at Intel to address the broader challenges and opportunities ahead. Tone at the top is an important part of this, and Pat Gelsinger is exactly the right person to convey those messages. Gary is delightful to speak with. He is articulate, personable, and a very effective leader. A closing comment he made sticks with me. He explained that he brought many lessons learned to Intel from his prior experiences. A key one is that, “if you’re in the foundry business, your customers will make you better.”

Proof of Intel’s Commitment to An Open Design Flow

On the first day of DAC there was more proof of Intel’s growing ecosystem and the commitment being made to create a broad set of reference flows. The following announcements were made by Intel ecosystem partners to support access to Intel’s EMIB technology:

  • Ansys is collaborating with Intel Foundry to deliver signoff verification of thermal and power integrity and mechanical reliability of Intel’s EMIB technology, spanning advanced silicon process nodes to various heterogeneous packaging platforms.
  • Cadence announced the availability of a complete EMIB 2.5D packaging flow, digital and custom/analog flows for Intel 18A, and design IP for Intel 18A.
  • Siemens announced the availability of an EMIB reference flow for Intel Foundry’s customers. This is in addition to their announcement of Solido™ Simulation Suite certification for custom IC verification on Intel 16, Intel 3, and Intel 18A nodes.
  • Synopsys announced the availability of its AI-driven multi-die reference flow for Intel Foundry’s EMIB advanced packaging technology, accelerating the development of multi-die designs.

Suk Lee, vice president for Ecosystem Development at Intel Foundry, commented, “today’s news shows how Intel Foundry continues to combine the best of Intel with the best of our ecosystem to help our customers realize their AI systems ambitions.”

You can see the complete announcement from Intel Foundry here. You can learn more about Intel’s plans to deliver a systems foundry for the AI era here. And that’s some backstory about how Intel’s Gary Patton shows the way to a systems foundry. #61DAC


My Experience #61DAC

by Daniel Nenni on 07-08-2024 at 6:00 am

Needham DAC

The theme of this year’s DAC was Chips to Systems, which is a full-circle type of thing since systems companies used to make their own chips. Old-school computer companies were the biggest chip makers when I started in the semiconductor industry. IDMs like Motorola and Intel replaced them at the chip level. Shortly after I joined the industry, a start-up company (Sun Microsystems) put HPC on our desktops with the slogan “The Network is the Computer” and changed computing forever.

Following Apple, other systems companies such as Tesla, Google, Amazon, and Microsoft took control of their silicon, making chips for internal use only. More than half of the traffic on SemiWiki is now from systems companies, which is a big shift to the left.

So, Chips to Systems is a good DAC theme for sure, but AI was the most-referred-to acronym at the conference. We do love our acronyms, absolutely.

I don’t know the official attendance numbers, but I would bet #61DAC traffic was much higher than last year. However, given that we’ve had 21 consecutive quarters of positive growth for total EDA revenue, you would think DAC attendance would be much higher. If DAC were held at the San Jose Convention Center, which it never has been, I would expect the attendance numbers to double, in my opinion.

There was certainly a different mix of companies on the exhibit floor. Some of the large EDA companies are less supportive of DAC but many new companies have replaced them which is a very good thing.

The big takeaway from DAC for me this year is the depth of experience inside the ecosystem. I asked just about everyone I spoke with when their first DAC was, thinking I would win since mine was 1984 in Albuquerque, NM. The winner was 1978, back in the RCA electronics days before there were exhibits.

DAC opened with the usual Sunday night networking party and the opening keynote from Charles Shi from Needham. Charles is the best-qualified analyst I know. He has a Ph.D. in Materials Science plus an MBA from UC Berkeley, and he spent five years at Applied Materials before switching sides to become an analyst. Charles is very approachable, and EDA is VERY lucky to have him. I would suggest to the DAC Committee that Charles speak first thing Monday morning in the DAC Pavilion for all to see.

Charles rightly pointed out that Nvidia really is the only company that has cashed in on the AI surge thus far, which echoes one of my concerns that the AI infrastructure spend is by far outrunning actual AI profits. This points to a bubble. If so, I just hope we prepare and have a soft landing.

Charles is right, the semiconductor industry is transforming once again, going back to where it all started. TSMC helped enable this systems company shift by integrating packaging into the foundry business and now Intel and Samsung are following suit. The interesting thing about Samsung is that they are a systems company. In fact, Samsung is one of the largest and most experienced electronic systems companies in the world. Intel has already laid claim to being a “Systems Foundry” so that is a lost opportunity for Samsung.

The foundry landscape certainly has changed since Intel re-entered the market. Samsung is now between two very big dogs eating out of the same bowl. Just like EDA back in the day. TSMC is the trusted foundry with the massive ecosystem of customers, partners, and suppliers. Intel is the Systems Foundry which is explained in our next blog: Intel’s Gary Patton Shows the Way to a Systems Foundry. Gary started at IBM in 1986 after completing his Ph.D. at Stanford. As I said, the depth of experience at DAC is amazing.

As I mentioned previously, I moderated a DAC panel on 3D IC. One of the panelists was Rob Aitken (Ph.D. from McGill University). Today Rob is Program Manager, National Advanced Packaging Manufacturing for the U.S. Department of Commerce (CHIPS Act). Previously he was at Synopsys and Arm. I first met Rob at Artisan 20 years ago, before the Arm acquisition, and many times after that. Rob’s special talent is explaining the most complex technologies in ways that even I can understand. On the panel he reduced the complexities of 3D IC down to making a sandwich…

Our #61DAC coverage will continue through this month so stay tuned…

Also Read:

LIVE WEBINAR Maximizing SoC Energy Efficiency: The Role of Realistic Workloads and Massively Parallel Power Analysis

Solido Siemens and the University of Saskatchewan

Career in EDA Versus Chip Design: Solving the Dilemma


Podcast EP234: An Update on Chips and Science Act Progress with Mike O’Brien

by Daniel Nenni on 07-05-2024 at 10:00 am

Dan is joined by Mike O’Brien. Mike was recently the vice president of aerospace and government at Synopsys. He has 40 years of experience in the semiconductor, software, and computer industries. In his 27 years in EDA and IP at Synopsys and Cadence, Mike helped build new lines of business including outsourced design services, research collaborations, and a government-focused vertical.

Currently, Mike is part of a team working for the US Department of Commerce that will play a key role to implement the CHIPS and Science Act’s historic investments in the semiconductor industry. He joined us in March for an overview of how the government is managing funding for manufacturing and R&D.

Mike returns to provide an update on progress and plans since his last Semiconductor Insiders podcast. He reviews details of funding work with organizations both large and small. The focus of the funding and results are discussed.

Mike also provides details about the work being done to address the talent shortage in the semiconductor industry, both from direct work with universities as well as collaboration with organizations across the ecosystem. Methods to reach across borders in the interest of a worldwide semiconductor ecosystem are also discussed.

Mike concludes with his views of what will be achieved in the coming months.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: David Heard of Infinera

by Daniel Nenni on 07-05-2024 at 6:00 am

David Heard

David Heard has served as CEO and has been a member of the Board of Directors since November 2020. Mr. Heard joined Infinera in June 2017 and served as the company’s Chief Operating Officer from October 2018 to November 2020. During his time as COO, Mr. Heard was responsible for leading the innovation of new solutions and the overall operational excellence of the company, overseeing functions including corporate development, facilities, human resources, information technology, marketing, operations, product lifecycle management, quality, research and development, and services.

Mr. Heard brings a proven track record of technology industry leadership, with more than 25 years of success in the industry. Prior to Infinera, Mr. Heard served as President of Network and Service Enablement at JDS Uniphase from 2010 to 2015, and as COO at BigBand Networks (now Arris) from 2007 to 2010. Earlier roles included President and Chief Executive Officer (CEO) at Somera (now Jabil), President and General Manager, Switching Division, at Tekelec (now Oracle), President and CEO at Santera Systems, and various positions at Lucent Technologies and AT&T.

Tell us about your company?
Infinera is a U.S.-based manufacturer of optical semiconductors and high-speed connectivity solutions for communications service providers, webscalers, and various industry verticals including government, energy, and healthcare. We build, sell, and deploy optical systems and subsystems that transport large amounts of data across fiber optic networks from shorter-reach metropolitan networks through ultra-long-haul and submarine networks. Our solutions provide the backbone for the internet, cloud services, and data center interconnect, and enable services such as 5G mobility, artificial intelligence, streaming video, and high-speed broadband. As part of delivering innovative, industry-leading solutions, Infinera owns and operates a U.S.-based compound semiconductor fab as well as an advanced testing and packaging facility.

What problems are you solving?
Bandwidth demands have been growing at more than 30% per year for more than 20 years. Infinera has been instrumental in helping network operators to cost-effectively keep up with the relentless growth in bandwidth. Leveraging our unique vertically integrated capabilities, Infinera has consistently provided innovative, flexible, and scalable solutions that increase capacity per fiber while driving down cost and power per bit.

What application areas are your strongest?
Infinera specializes in cost-effective scalable optical connectivity solutions. We focus on higher-capacity solutions capable of transmitting multiple terabits of data across all network applications, from intra-data center through ultra-long haul and submarine.

What keeps your customers up at night?
Cost-effectively keeping up with their bandwidth demands, including rapidly growing demands between data centers driven by explosive applications such as artificial intelligence. Our customers also operate in highly competitive environments, driving the need to consistently provide differentiated service offerings.

What does the competitive landscape look like and how do you differentiate?
This is a growing field that is confronting the unprecedented operational challenges associated with the impact of AI workloads. There are traditional suppliers of optical networking gear focused on addressing this problem, as well as networking companies that can leverage a whole new ecosystem of optical pluggable technology. The environment remains extremely competitive, with the importance of vertical integration and performance leadership being critical to winning customers. Solutions providers need to consistently invest and innovate to bring new technologies and solutions to market that provide incremental benefits to network operators.

What new features/technology are you working on?
We continue to leverage our unique vertical integration capabilities, including our expertise in semiconductor material sciences, to bend the laws of physics to provide the Moore’s Law of economic scalability to critical network infrastructure. Our solutions and technologies are enabled by our U.S.-based semiconductor fab and our advanced test and packaging facility. We are currently bringing to market solutions that enable transmission of 800 Gb/s in a power-efficient pluggable form factor, 1.2 Tb/s in a high-performance embedded solution, and 1.6 Tb/s in an ultra-low-power, short-reach intra-data center solution.

How do customers normally engage with your company?
Networking solutions are typically large and complex deployments. As a result, we work closely with our customers to choose the right technology, optimize network designs, and deploy networks.

Our website is https://www.infinera.com/. We can be reached through our contact page: https://www.infinera.com/contact-us/. Our LinkedIn is https://www.linkedin.com/company/infinera/.

Also Read:

CEO Interview: Dr. Matthew Putman of Nanotronics

CEO Interview: Dieter Therssen of Sigasi

CEO Interview: Dr. Nikos Zervas of CAST