The global semiconductor market in 2021 was $555.9 billion, according to WSTS data released by the Semiconductor Industry Association (SIA). The market grew 26.2% from 2020, the largest annual increase since the 31.8% gain in 2010, eleven years earlier. We at Semiconductor Intelligence track publicly available semiconductor market forecasts and award a virtual prize for the most accurate forecast for the year. To qualify, a forecast must be publicly released anytime between November of the prior year and the release of January data from WSTS (generally in early March). The winner for 2021 is Future Horizons with an 18% forecast released in January 2021. Malcolm Penn of Future Horizons is a perennial optimist, usually calling for higher growth rates than other forecasters. For 2021 he was closest to being right. Runners-up are the 14% forecasts from us at Semiconductor Intelligence and from Evercore ISI. Most forecasts for 2021 made prior to March 2021 were in the range of 8% to 14%.
What is the outlook for 2022? Who will win the virtual forecasting prize next year? Recent forecasts for 2022 semiconductor market growth range from 9% from WSTS to 15% from us at Semiconductor Intelligence. Future Horizons is less optimistic for 2022 than for 2021, with a 10% projection based on a downturn beginning in 4Q 2022.
Revenues of the top semiconductor companies were generally strong in 4Q 2021. Non-memory companies grew revenues 7% in 4Q 2021 versus 3Q 2021. The strongest growth was from Qualcomm at 14%, AMD at 12% and STMicroelectronics at 11%. All other non-memory companies had revenue increases ranging from 3% to 8%. The weighted average guidance for 1Q 2022 for the non-memory companies is a 1% decline from 4Q 2021. Excluding Intel, which is expecting a 6% decline, the non-memory companies expect 2% growth in 1Q 2022 versus 4Q 2021. Most companies indicate continuing supply chain problems, with overall demand still exceeding supply.
The memory companies (Samsung, SK Hynix, Micron Technology and Kioxia) collectively had a 5% decline in revenue in 4Q 2021 versus 3Q 2021. SK Hynix and Kioxia had revenue gains while Samsung and Micron had revenue declines. The memory companies generally see strong demand, but shortages of other components are hampering the production of some electronic equipment and the demand for memory in these products.
The first quarter of the year typically shows a revenue decline from the fourth quarter of the prior year. Over the last ten years, 1Q has declined between 0.5% and 15% from 4Q – except for 1Q 2021, which was up 3.8% from 4Q 2020. With many companies guiding for revenue growth in 1Q 2022, the outlook for the year 2022 is healthy. We are projecting 1Q 2022 will be flat to down slightly from 4Q 2021. Growth in 2Q 2022 through 4Q 2022 should be moderately healthy as capacity shortages continue to be worked out. Thus, we at Semiconductor Intelligence feel reasonably confident in our 15% forecast for 2022.
Looking beyond 2022, many factors come into play. Most semiconductor shortages are expected to be resolved by 2023, leading to a more balanced market. Global economies should be back on track by 2023, with either the end of the COVID-19 pandemic or the world learning to manage COVID-19 while maintaining relatively normal economic activity.
The International Monetary Fund (IMF) January forecast calls for 4.4% global GDP growth in 2022. Global GDP grew 5.9% in 2021 in a bounce back from the pandemic-driven downturn in 2020. The IMF expects global GDP growth to moderate to 3.8% in 2023 – close to the long-term trend – as economies return to more normal activity. IDC projects smartphone growth will moderate from 5.3% in 2021 to the 3% to 4% range in 2022 through 2024. IDC forecasts PCs will decline 1.1% in 2022 following pandemic-driven boom growth of 14.8% in 2021. PCs are expected to grow in the 1% to 2% range in 2023 to 2024, again close to the long-term trend.
The automotive market has been hard hit by parts shortages in the last year. Statista estimated light vehicle production grew 8% in 2021, but growth would have been higher if more parts had been available. Parts shortages should persist into 2022 with some easing by 2023. Statista projects strong growth of 9% in 2022 and 11% in 2023 as automotive suppliers respond to pent-up demand. Growth is pegged at 6% in 2024, returning closer to normal trends.
IC Insights predicts the semiconductor market will return to recent long-term growth trends, with a compound annual growth rate of 7.1% from 2021 to 2026. At Semiconductor Intelligence, we expect 2023 semiconductor market growth in the high single digits – in line with historical trends.
Dan is joined by Jan Pantzar, VP of sales and marketing at VSORA, a France-based provider of high-performance silicon intellectual property (IP) and chip solutions for artificial intelligence, digital communications and advanced driver-assistance systems (ADAS) applications.
Dan and Jan explore the VSORA architecture, which uniquely combines DSP with machine learning acceleration and high-bandwidth memory-on-chip. Current and future use in applications such as self-driving cars is also explored.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
We know about formal methods for cache coherence state machines. What sorts of tests are possible using dynamic coherence verification? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I continue our series on research ideas. As always, feedback welcome.
This is a slightly dated paper but is well cited in an important area of verification not widely covered. The authors’ goal is to automatically generate tests for multicore systems which will preferentially trigger consistency errors. Their focus is on homogeneous designs using Gem5 as the simulator. Modeling is cycle accurate; in similar work, CPU models may be fast/virtual while the coherent network/caches and pipelines are modeled in RTL running on an emulator.
The method generates tests as software threads, one per CPU, each a sequence of load, modify, store and barrier instructions. Initial generation is random. Tests aim to find races between threads where values differ between iterations, i.e. exhibit non-determinism. The authors argue that such cases are more likely to trigger consistency errors.
The authors then use genetic programming to combine non-deterministic components from existing sequences, building new sequences that strengthen the likelihood of races and are therefore more likely to fail consistency checks. Where they find inconsistencies, they run a check to classify these as valid or invalid per the memory consistency model. They have a mechanism to measure coverage to guide the genetic algorithm and to determine when they should stop testing.
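To make the flow concrete, here is a highly simplified Python sketch of the idea: score addresses by cross-thread race potential, then splice the racier per-CPU sequences of two parent tests into a child. The scoring and crossover details below are illustrative stand-ins, not the paper's exact formulation.

```python
import random

# A test is one instruction sequence per CPU; each instruction is an
# (op, addr) pair, with barriers carrying an unused address.
OPS = ("load", "store", "barrier")

def random_test(num_cpus=4, seq_len=16, num_addrs=8):
    return [[(random.choice(OPS), random.randrange(num_addrs))
             for _ in range(seq_len)] for _ in range(num_cpus)]

def racey_scores(test, num_addrs=8):
    """Score each address: nonzero only if at least one thread writes it
    and at least two threads access it, a crude proxy for race potential."""
    scores = [0] * num_addrs
    for addr in range(num_addrs):
        writers = sum(any(op == "store" and a == addr for op, a in seq)
                      for seq in test)
        accessors = sum(any(op != "barrier" and a == addr for op, a in seq)
                        for seq in test)
        if writers >= 1 and accessors >= 2:
            scores[addr] = accessors
    return scores

def selective_crossover(parent_a, parent_b):
    """Build a child by keeping, per CPU, whichever parent's sequence
    touches higher-scoring (racier) addresses."""
    sa, sb = racey_scores(parent_a), racey_scores(parent_b)
    def weight(seq, scores):
        return sum(scores[a] for op, a in seq if op != "barrier")
    return [a_seq if weight(a_seq, sa) >= weight(b_seq, sb) else b_seq
            for a_seq, b_seq in zip(parent_a, parent_b)]

# Evolve: cross the two raciest tests in a random population.
population = [random_test() for _ in range(20)]
population.sort(key=lambda t: sum(racey_scores(t)), reverse=True)
child = selective_crossover(population[0], population[1])
print("child raciness:", sum(racey_scores(child)))
```

In the real method, each child is then run repeatedly on the simulator, non-determinism observed across runs feeds back into the fitness measure, and coverage decides when to stop.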
The authors describe bugs that such a system should find in a cache coherent network, and their method performs well. The authors note that formal methods are limited to early, significantly abstracted models. In contrast, this method is suitable for full system dynamic coherence verification.
Paul’s view
Memory consistency verification is hard, especially pre-silicon. But it is a very important topic, one we find increasingly center stage in our discussions with many customers.
The heart of this paper is a genetic algorithm for mutating randomized CPU instruction tests so that they exhibit more and more race conditions on memory writes and reads. The authors achieve this using a clever scoring system to rank memory addresses based on how many read or write race conditions there are on that address in a given test. Read or write instructions on addresses scoring high (which they call “racey”) are targeted by the genetic algorithm for splicing and recombination with other tests to “evolve” more and more racey tests. It’s a neat idea, and it really works, which is probably why this paper is so well cited!
The authors benchmark their algorithm on a popular open-source architecture simulator, Gem5, using the Ruby memory subsystem and GARNET interconnect fabric. They are able to identify two previously unreported corner case bugs in Gem5, and show that their clever scoring system is necessary to create tests racey enough to catch these bugs. The authors also show that their algorithm finds all previously reported bugs much faster than other methods.
Overall, I found this a thought-provoking paper with a lot of detail. I had to read it a few times to fully appreciate the depth of its contributions, but it was worth it!
Raúl’s view
Our October 2021 article, RTLCheck: Verifying the Memory Consistency of RTL Designs, addressed memory consistency by generating assertions and then checking them. This paper takes a different approach, modifying well-known constrained random test pattern generation by generating the tests with a genetic algorithm featuring a particularly clever “selective crossover”.
They run experiments for the x86-64 ISA running Linux across 8 cores. The cores are modeled as simple out-of-order processors with L1 and L2 caches. Tests run in 512MB of memory with either a 1KB or 8KB address range. Each test runs 10 times. Three experiments are run: pseudo-randomly generated tests, genetic without selective crossover, and genetic with selective crossover. The evaluation detected 11 bugs, 9 already known and 2 newly discovered. Within 24 hours only the full algorithm (at 8KB) finds all 11 bugs; the other approaches find 5-9 bugs. The full algorithm also beats the other approaches in terms of coverage for MESI and TSO-CC (cache-coherence protocols), although by a small margin.
The paper is highly instructive although a challenging read unless you’re an expert in MCM. The authors provide their software on GitHub, which no doubt encouraged the subsequent papers that cite this work 😀. Given enough expertise, this is certainly one arrow in the quiver to tackle full system memory consistency verification!
My view
As Raúl says, this is a challenging read. I have trimmed out many important details such as memory range and stride, and I skipped runtime optimization, simply to keep this short summary easily digestible. Methods like this aim to automatically concentrate coherency suspects into a relatively bounded test plan. I believe this will be essential for dynamic coherence verification methods to catch long-cycle coherence problems.
-Intel jumpstarts foundry model with Tower Semi buy @ $5.4B, gets complementary tech to round out offerings
-Approval based on satisfying China’s needs as well
-Margin concerns overblown, but will it be allowed to flourish?
Intel pays up to get foundry and technology….
Intel announced a $5.4B acquisition of Tower Semiconductor in Israel. This amounts to $53 per share for a company whose last trade was $33 per share, a whopping 60% premium.
Intel is getting a very well-run company that has been a foundry for a long time, along with technology Intel does not currently offer, which will broaden its foundry offering beyond just bulk CMOS.
In addition, the company gets more China business as well as military business, both of which are needed to support Intel’s strategic direction. In essence this deal hits multiple birds with one stone, which likely accounts for the premium.
Margin concern is overblown and the wrong way to look at it
On the conference call a number of analysts seemed to be negative on the acquisition due to Tower’s lower margins. Almost any company acquired by Intel is going to have lower margins, and being a smaller player in older technology in the land of giant TSMC is not easy on margins.
We don’t think Intel is buying the business for its margins or financials or as a “bolt on” but obviously for its current positioning as a foundry with technology complementary to Intel’s. If Tower can help Intel’s core become a more competent foundry, then that alone is more than worth the acquisition price.
Will Intel let it succeed?
The main reason Intel previously failed in the foundry business was that the upstart foundry group was smothered by the larger IDM corporate mentality. The current foundry effort is relatively weak as Randhir Thakur has never run a foundry or anything close. Our hope is that the management of Tower becomes the more dominant partner rather than being suppressed. In this way Intel can try to get what it paid for… expertise.
Corporate antibodies are strong in this one and Pat Gelsinger may have to take steps to make it so.
We like Tower and its management
We view Tower as a tough, smart, scrappy company in the mold of many other Israeli companies. It has been cobbled together over the years out of a string of deals and acquisitions done for little to no money, following a much different model.
We have known the CEO, Russell Ellwanger, since his days at Applied Materials and think he has done a great job. If Tower’s mentality and methods can be set loose in a much larger company with large resources, they might get a lot done.
A limited commodity
There aren’t that many foundries in the world that Intel could buy that would potentially make a difference and make sense.
We view Tower as a somewhat smaller but better, and more profitable, version of GlobalFoundries. Global was cobbled together through a number of acquisitions, stumbled badly in mainstream CMOS, and is now also trying to avoid the big dogs (like TSMC) by doing “specialty” silicon. Abu Dhabi poured a ton of money into GloFo, which is barely break-even, whereas Tower was more of a bootstrap operation that is solidly profitable. We think that Intel might have been interested in GloFo until its IPO priced it out of that possibility (but Intel could get a second chance).
Could China have been in the mix?
We would certainly not have been surprised if China was taking a hard look at Tower. China has been sniffing around a number of Israeli-based technology companies because of its concern about being cut off from US technology, and CFIUS restrictions on foreign investment in and acquisition of US companies make Israeli acquisitions much easier by comparison.
The very high premium could also be a sign of a behind-the-scenes bidding war, or of a wish to avoid someone else (like China) coming in to outbid Intel after the deal was announced.
Technology is needed and complementary
The technology side should be a no-brainer for Intel….just let it run and don’t screw with it. Most of the processes that Tower runs harken back to Gelsinger’s earlier stint at Intel, and most of those Intel people took the buyout long ago, so Tower knows the tech better than Intel does.
Intel can likely help with some older equipment and fabs it has lying around and Tower could easily contribute to equipment reuse. Tower could likely get a ton of benefit from what Intel has written off.
The stock
While we view this deal very positively and as a step in the right direction, it has to be approved first, which will likely take at least a full year, and then be integrated (or not).
For Intel we think it’s a great trade to sell off the crummy memory business and get a foundry business in trade…it just makes sense.
We don’t see this, however, as a reason to run out and buy Intel stock, as the impact will take years, but we do see it supporting the longer-term strategy.
We will likely hear more at Intel’s investor day this Thursday… in the presentation “Goliath buys David”.
About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional.
We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.
We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.
About Semiwatch
Semiconductor Advisors provides this subscription based research newsletter, Semiwatch, about the semiconductor and semiconductor equipment industries. We also provide custom research and expert consulting services for both investors and industry participants on a wide range of topics from financial to technology and tactical to strategic projects. Please contact us for these services as well as for a subscription to Semiwatch.
Exciting times for the semiconductor industry! Last week Intel announced a billion-dollar fund to build a foundry ecosystem, and today Intel announced it is acquiring foundry Tower Semiconductor for $5.4 billion, WOW! Some people doubted Intel’s commitment to the foundry market this time. I think we can now put that to rest. Intel is in the foundry business, period.
First let’s talk about the billion dollar ecosystem investment:
How It Works: As a key part of its IDM 2.0 strategy, Intel recently established IFS to help meet the growing global demand for advanced semiconductor manufacturing. In addition to providing leading-edge packaging and process technology and committed capacity in the U.S. and Europe, IFS is positioned to offer the foundry industry’s broadest portfolio of differentiated IP, including all of the leading ISAs.
A robust ecosystem is critical to helping foundry customers bring their designs to life using IFS technologies. The new innovation fund was created to strengthen the ecosystem in three ways:
Equity investments in disruptive startups.
Strategic investments to accelerate partner scale-up.
Ecosystem investments to develop disruptive capabilities supporting IFS customers.
Who’s Involved: The IFS Accelerator features innovative partner companies across each of the three pillars of the program:
EDA is an easy one for Intel as they are the biggest consumer of EDA tools. In fact, Intel is one of Synopsys’ largest customers, if not THE largest. And since Synopsys is also the largest IP vendor this relationship is critical. So, EDA is important but not a big challenge since Intel is already tight with the EDA community.
IP is a similar situation for Intel. There are already relationships with the commercial IP vendors through the TSMC-based business. Remember, Intel historically uses TSMC for 20% of its production, soon to increase to more than 50% with N3, so there is a lot of TSMC-based commercial IP floating around Intel already. Opening that up to Intel-specific processes should not be a hard thing to do, and that is where the billion dollars is going: to get that IP silicon-proven on Intel processes and packaged up for foundry customers. According to the IP ecosystem, they are already gearing up to port military-grade IP to Intel processes.
The big weakness I saw in the announcement is the Design Services Partners. Design services play an important role inside the foundry ecosystem. I had once suggested that Intel buy a big ASIC company to partner closely with, like TSMC and GUC, UMC and Faraday, or SMIC and VeriSilicon.
Of course, now that Intel is buying Tower Semiconductor for $5.4B, that disrupts the ecosystem quite a bit:
Tower’s expertise in specialty technologies, such as radio frequency (RF), power, silicon-germanium (SiGe) and industrial sensors, extensive IP and electronic design automation (EDA) partnerships, and established foundry footprint will provide broad coverage to both Intel and Tower’s customers globally. Tower serves high-growth markets such as mobile, automotive and power. Tower operates a geographically complementary foundry presence with facilities in the U.S. and Asia serving fabless companies as well as IDMs and offers more than 2 million wafer starts per year of capacity – including growth opportunities in Texas, Israel, Italy and Japan. Tower also brings a foundry-first customer approach with an industry-leading customer support portal and IP storefront, as well as design services and capabilities.
What isn’t mentioned is military embedded systems. Tower is perfectly positioned for US military aerospace and defense contracts, and I can assure you that is part of the Intel strategy. Jazz Semiconductor, formerly Rockwell, is a key part of this transaction. More leverage for CHIPS Act money? Absolutely.
Jazz Semiconductor Trusted Foundry (JSTF), a wholly owned subsidiary of Tower Semiconductor Newport Beach, Inc., was created and accredited as a Category 1A Trusted Supplier by the United States Department of Defense’s (DoD’s) Defense Microelectronics Activity (DMEA) as a manufacturer of semiconductors that may be used in trusted applications. In addition, as of October 2014, JSTF has been accredited with Category 1B.
JSTF joins a small list of companies accredited by the DoD Trusted Program, established to ensure the integrity of the people and processes used to deliver national security critical microelectronic components, and administered by the DoD’s Defense Microelectronics Activity (DMEA). JSTF is proud to join the DoD Trusted Program to enable trusted access to a broad range of on-shore technologies and manufacturing capabilities.
The Intel investor call is getting ready to start. I will add more thoughts in the comments section.
On Monday, February 7, 2022, Intel Foundry Services (IFS), made a series of major announcements regarding the RISC-V architecture and ecosystem:
Intel will be joining the RISC-V International association with a Premier Membership, and Bob Brennan, Vice President of Customer Solutions Engineering for Intel Foundry Services, will be joining both the RISC-V Board of Directors and Technical Steering Committee.
Intel has created a $1B strategic innovation investment fund as a collaboration between Intel’s venture capital group and IFS to support established companies and the slew of early-stage start-ups that are looking to create disruptive technologies and to foster further build-out of the RISC-V architecture and ecosystem.
IFS launched Accelerator, a foundry ecosystem alliance with three specific areas of focus:
The IFS Accelerator – EDA Alliance includes: Ansys, Cadence, Siemens EDA and Synopsys.
The Accelerator – IP Alliance includes IP vendors as well as SoC silicon vendors, chiplet vendors, board vendors and accelerator manufacturers. RISC-V vendor members include Andes Technology and SiFive. Other partners in the IP Alliance include AwaveIP, Analog Bits, Arm, Cadence, eMemory, M31, Silicon Creations, Synopsys and Vidatronic.
The Accelerator – Design Services Alliance includes design service providers as well as cloud service providers, and the three partners are Capgemini, Tech Mahindra and Wipro.
Finally, Intel announced that IFS is working with several cloud providers to develop an open chiplet platform for creating products with multiple types of computing processor modules aboard.
All these announcements taken together indicate just how serious Intel is about restarting its silicon foundry business and its intent to become a major player. Intel was already supporting the x86 and Arm architectures, but the addition of RISC-V indicates a serious commitment to the silicon foundry customer base and a desire to be open to new innovations going forward. In fact, Intel has already developed a RISC-V CPU as a soft processor for use in its FPGA product lines. The Nios V is based on the RISC-V RV32IA architecture and is designed for performance, with atomic extensions, a 5-stage pipeline, and AXI4 interfaces. However, the current set of announcements goes far beyond this soft CPU offering.
Intel’s support in these announcements of the RISC-V architecture and its evolving ecosystem further validates RISC-V as a viable counter to Arm CPUs in the marketplace and adds even more credibility to the growing momentum behind RISC-V.
The creation of an open chiplet platform is especially notable since it could mean platforms using x86, Arm and RISC-V processors all in the same product. This would be very interesting to many companies looking to accelerate their product introductions and also reduce development costs. A standard created around this concept would go far in influencing future production commitments from prospective customers looking for foundry capacity and wafer starts. Since Intel would be responsible for developing such a standard, they would likely figure prominently in the decisions reached on where to engage such technologies.
The four RISC-V IP vendors partnering with IFS – Andes Technology, Esperanto Technologies, SiFive and Ventana Micro Systems – are all poised to benefit from this announcement. Their IP will be used by future IFS customers to optimize RISC-V CPU cores, chiplets and packaged products. In addition, they would likely gain access to Intel foundry capacity for their own silicon products at advanced nodes at some future point, although the current announcement does not discuss this.
All in all, this series of announcements, adding 16 founding companies to the IFS Alliance, paints a picture of a company – Intel – that is looking to establish a serious presence in the silicon foundry business, to once again be not only a market leader (they are currently #2 behind Samsung in the 2021 revenue rankings), but to also resume their position as a thought leader in our industry. This is reminiscent of the Intel of the 1980s and 90s, when Intel was driving the industry with new products and concepts. Truthfully, much of this was initially driven by the rise of the PC market and then later the rise of the internet, but there was no shortage of innovative ideas and concepts.
Today, the semiconductor market is long past being solely driven by the PC market and the internet, but new horizons beckon with AI, 5G, IoT, IIoT and automotive all poised to become the new drivers of our industry. Intel seems poised to assume a leadership position once again, and their recent commitments in terms of new fabs and capital expenditures underline these commitments. Adding support for the emerging RISC-V architecture and ecosystem back up these commitments in a very meaningful way!
However, there are other areas of the semiconductor market that are impacted by these announcements beyond the silicon foundry market.
Implications and Possibilities
The EDA Market
IFS named several EDA vendors to their Accelerator EDA Alliance – Ansys, Cadence, Siemens EDA and Synopsys. Each of these companies has a range of design tools that can be applied to silicon designs using the RISC-V architecture. Semico expects these companies will see increased usage of their tools now that Intel is seeking new customers for its foundry services.
An added benefit to these companies is having early access to process and advanced IC packaging roadmaps, process design kits (PDKs) and technical training. This allows the EDA vendors’ R&D teams to fine-tune EDA tools and IP for the Intel portfolio of process and packaging technologies so customers can meet power, performance and area requirements.
SoC Design
SoC designs are growing around 5% – 6% per year – and this in an environment where wafer starts are constrained due to high demand. With Intel signaling investment in several new fabs over the next few years, and with TSMC and Samsung also committing to higher capital expenditures over the short term, Semico believes the industry can expect to see more companies undertake new design activity. Although design starts are not tied directly to wafer capacity, it does help to have a reasonable expectation that your just-completed design will find a home at one of the leading silicon foundries and have a hope of being produced in a reasonable timeframe!
The SoC Market
Creation of an open standard around a chiplet platform that embodies heterogeneous CPU architectures offers the possibility of establishing new criteria for performance and flexibility in SoC silicon solutions. The emerging chiplet market already has a great deal of interest in it and momentum behind it, and the commitment to drive its evolution further will create even more support for it.
The SIP Market
More designs and more SoC silicon mean more IP being created and licensed. The chiplet phenomenon is going to be one of the main drivers of growth in this market in the coming years. IP is used in the creation of any SoC solution, and chiplets will be no exception. As we have seen previously, designs that start out small grow in performance and complexity over time due to evolving market requirements, requiring even more IP. The prospect of heterogeneous CPU designs based on the chiplet standard and platform Intel is looking to create could enable another uptick in IP market growth and revenue generation. Increasing complexity and performance levels usually mean more IP usage, and the IP market is well-positioned to deliver on those requirements.
RISC-V International
We would be remiss if we didn’t also mention the great job the RISC-V International association has done in shepherding and guiding the RISC-V community to this point. Through their steady guidance, they have enabled a new CPU architecture to take center stage in an amazingly short period of time. Their leadership should be commended for its vision and service to the semiconductor industry. Good job everyone!
The Other Shoe
“Synchronicity describes circumstances that appear meaningfully related yet lack a causal connection.” – C.G. Jung. Or, nothing ever happens in a vacuum!
On Tuesday, February 8, 2022, Softbank and NVIDIA officially terminated their agreement for NVIDIA to acquire Arm. Softbank will revisit the plan to IPO Arm as a US-headquartered company set to occur sometime within their 2023 fiscal year, ending March 31, 2023.
Simon Segars has stepped down as Arm CEO and will be replaced by Rene Haas, who will oversee Arm’s IPO. Segars will “support the leadership transition in an advisory role for Arm.”
Upon the initial execution of the Purchase Agreement between SoftBank and NVIDIA, NVIDIA paid SoftBank a total of $2.0B. Here is a direct quote from SoftBank’s 2021 annual report.
“Upon the execution of the Purchase Agreement on September 13, 2020, SBGC and Arm received cash totaling $2.0 billion. Of this amount, $1.25 billion was received as a deposit for part of the consideration in the Transaction (refundable to NVIDIA subject to certain conditions until the closing of the Transaction, after which such amount will become non-refundable) and $0.75 billion was received by Arm as consideration for a license agreement that Arm and NVIDIA entered into concurrently with the execution of the Purchase Agreement.”
The official ending of the purchase agreement is a recognition of the regulatory headwinds that were aligned against the deal and signals it is time to move to the next phase of this story – the successful IPO of Arm as an independent entity once again.
In this context, the ‘other shoe’ refers to the Intel announcements and now the Softbank-NVIDIA announcement in conjunction with each other. As far as I can see, no one has yet commented on these two events taken together. While coincidences do happen, I suspect many people will take these two events as being tied together; they very well may be, although they don’t appear to be on the surface.
Could Intel have orchestrated these announcements at this time to take advantage of the difficulties being encountered by Arm and NVIDIA? It is possible, but unlikely. The deal was either going to gain approval or not based on its own merits, and the timing of these announcements, while very interesting, is probably not connected to the approval process.
Arm–NVIDIA
Now that the acquisition is officially terminated, what is next for Arm and NVIDIA? Arm has received the $750M from NVIDIA, presumably to account for licensing all of Arm’s IP in a long-term licensing arrangement and to partially offset the costs of collaboration before the two companies became one. Now that the merger is no longer happening, what are the likely next steps for both?
Among the arguments Arm put forth in favor of approval was that if SoftBank tried to pursue an IPO for Arm:
SoftBank would not be able to attain the desired revenue from the IPO.
Arm would not be able to attract the level of investment they feel is necessary to fund their future R&D efforts as an independent company.
While both these statements may be true to a degree, it seems highly unlikely that an IPO of Arm would not generate a reasonable amount of revenue. Would this be enough to satisfy SoftBank? They originally acquired Arm for $32B and would definitely want to recoup at least that sum. The question is: can they?
Arm is the largest IP vendor today by far, and one could assume that any IPO would take that into account. Both the SoC market and the IP market are growing nicely, and as outlined above, the Intel chiplet initiative will likely add to that growth. It is likely that Softbank would garner a reasonable amount of cash from the IPO—although probably not the $32B+ they are looking for—and they could take the rest in stock in the new company. Over the long term, this would provide them the amount of money they think Arm is worth—it would just take longer than Softbank would want.
Is it likely that Arm, in its IPO, would be unable to generate adequate amounts of capital to fund their R&D efforts? This is more problematic. When Arm was a standalone company, they did have this problem to some degree. But that was several years ago, when Wall Street was not as in tune with the SoC, IP and AI markets as it is today. Things have changed with the rise of AI, which Arm is at the heart of with their Neoverse CPU architecture. They now have a competitor who is about to become more capable with new investments in its ecosystem. Arm will have its work cut out for it, but the company is still the market leader by a wide margin!
With the termination of the acquisition, NVIDIA’s efforts to commercialize their IP into licensable form using Arm’s expertise are going to be more difficult, but not impossible. Their efforts to move their GPU architecture SIP into SoC silicon and obtain access to some of the markets Arm serves need to be intelligently orchestrated and managed.
Can NVIDIA still successfully commercialize its GPU as licensable IP? It is hard to believe they will be unable to do so. They could always commission Arm to do the commercialization and not need to pay $40B+ for the privilege. They already have a very tight partnership with Arm. Why not give them another $1B–$2B to complete the task if NVIDIA truly believes this is a must-have for their business going forward? It will be more difficult now, but still mostly doable.
This brings up an interesting point: Arm and NVIDIA have been working on this collaboration for approximately 18 months – how close were they to completion? Now that the deal is over, some of the IP being worked on must be close to done even if it still needs some work. What happens to that IP? I’m sure they can’t simply abandon it – too much money and time invested. Once it is finally completed, we could see a cross-licensing deal between the two companies which would still get the NVIDIA IP into the market with some sort of revenue sharing arrangement. After all, one of the reasons NVIDIA wanted Arm was to gain their marketing prowess and contacts to market the IP. That can still be done, but as a partnership, not as an acquisition.
A larger loss to NVIDIA is likely the lack of access to Arm’s markets – the ones that NVIDIA cannot access today. NVIDIA was looking for Arm’s expertise to take the NVIDIA GPU architecture and pare the power requirements down to something more palatable for the mobile market. This is probably still possible, but it needs to be carefully done. Not every application requires hundreds of GPU cores, and a judicious scaling of the architecture will probably yield results that fit into most applications as IP. Also, now that Arm and NVIDIA will remain separate companies, and NVIDIA does not absolutely need to use Arm exclusively, the door is partially open to them using one of the RISC-V CPU cores that are coming onto the market. At the end of the day, it’s all about fitting the right solution to the right application, and a RISC-V CPU core might be one of those solutions in the future.
Conclusions
With these announcements, Intel has acted both strategically and tactically:
Strategically, in aiding and empowering a competing architecture to Arm
Tactically, in lining up 16 companies to support its foundry initiative
Intel satisfied many needs all at one time and has adroitly given itself momentum for reentry into the silicon foundry market. Now it needs to execute on this vision. Given how Intel has handled the process so far, Semico thinks it likely it can do so!
There’s a quiet upheaval happening in the semiconductor industry. The rules that have always governed the industry are fraying, undoing assumptions that we took for granted, that were pounded into us in school. The irreproachable Moore’s Law, the promise that exponential progress will make things cheaper, better, and faster over time, is dead.
People are starting to appreciate that making a chip is not easy. Shortages and the geopolitical concentration of TSMC and ASML have awakened the popular imagination and have highlighted the science-fiction-like process of chipmaking. The road ahead has obstacles that aren’t widely appreciated. Making a semiconductor is going to get even harder, more expensive, and more technical. In other words, the challenges are going to accelerate.
To operate in the future, chipmakers will need more scale, more talent, and more money. I’ve written about this before, but I want to dive deeper into what’s driving the rising costs of making a semiconductor. It affects the entire range of chips, from the most advanced to the most basic. The trend is not new; it’s already been happening, but I believe it will now start to pick up speed. And the price increases will likely impact every person on earth. This inflationary cost is not transitory.
To understand how we got here, I want to first refresh you on the death of Moore’s Law. We’ve topped out transistor-energy scaling and frequency scaling, and we’re starting to hit the end of multi-core scaling even as transistor density continues to increase. But more important than the end of those trends, cost scaling has ended. While we continue to improve transistor density through new techniques, each one layers on additional costs.
ASML, in its investor day, made a bold statement that Moore’s Law will continue with system-level scaling. Another name for this is advanced packaging. But these costs are additive to the already escalating costs of making a smaller transistor.
While I believe Advanced Packaging is going to solve the transistor-density problem, I don’t believe it will make chips cheaper. In fact, transistors per dollar have gotten more expensive since the early 2010s.
I want to focus not just on the technological headwinds, but the cost headwinds. One of the major historical assumptions of Moore’s Law is that not only would your transistors double every two years, but the cost of the transistors would decline. No longer. The chart below is from Marvell’s 2020 investor day. The bar for 28nm was approximately 2011-2012.
What’s interesting is that a qualitative change happened around 28nm, as it was one of the last planar nodes. Planar in plain language is a two-dimensional surface (plane), while FinFET – the technology that replaced planar – introduced a “fin” jutting upwards from the transistor, creating a 3D structure instead of a 2D structure. We are now on the verge of yet another gate transition – gate-all-around (GAA), an even more 3D-intensive structure. As we switch to GAA or the next iteration of gate technology, I believe that the cost per 100 million gates will continue to increase, just as it did for FinFET over planar. This is driven by the increased complexity of making these chips, namely the added number of steps in manufacturing.
It isn’t just this transition that’s pushing costs higher. The lagging edge, meaning older chips, is starting to get more expensive, too. The story here is not technological but economic: what was once ample capacity with commodity-like returns is starting to become in demand. Businesses are not willing to add capacity unless price increases follow. This is another key driver, not just for the most advanced chips but for older ones as well.
Finally, it is not just old chips and new chips: the companies that make the chips (semiconductor fabs) are becoming more consolidated and more strategic. There really isn’t a lot of room at the leading edge, where the most advanced chips are made. This is the third driver of semiconductor costs: fabs that offer a one-of-a-kind product are passing their rising costs on to customers. TSMC is not a price-taker, and the world is reliant on its products. Despite the increasing costs, they are starting to extract larger profits. Fabless companies have no choice but to pay more.
Each of these themes deserves a deeper dive. I’ll start first with my favorite topic: Semicap – or the tools that are required to make a chip.
Industry Consensus: Semicap Cost Intensity Will Go Up
One of the universal themes this earnings season was the higher cost of tools to make semiconductors. The real warning shot was this slide that Tokyo Electron presented at its investor day.
The drastic price increase is broad-based and is across DRAM, NAND, and Logic. Given that 5nm is in production today, this is not a prediction, but a trend that’s going to continue. The primary driver is not only the rising costs of tools such as EUV, but the rising number of steps to make a chip. Below is a graphic that shows the increase of steps over time.
It’s not just Tokyo Electron making the call for higher intensity. This most recent earnings season, TSMC, Lam Research, KLAC, and other Semicap companies called out rising intensity. All else being equal, I think the cost of 100K wafer starts should start to rise low- to mid-single digits at the leading edge. Another way to look at this is from the top-down perspective. I compared the total capacity shipped in Million Square Inches (MSI) to wafer-fab equipment (WFE) spending growth. Think of this as total volume versus the spending to make more wafers.
Excuse the 5-year averages; the numbers year to year are much bumpier. The workbook is attached at the end of this post, by the way.
The relationship used to be that spending on new fab equipment underpaced the expansion of MSI, as each tool purchased would give more capacity. I want to highlight that, if you squint, you can see that the relationship of WFE and MSI seems to have flipped somewhere around 2012. This coincides with the 28nm node, the crucial node where cost increases started. Going forward, I think we should expect WFE to increase faster than historical averages.
From 2012 to 2020, MSI increased at a compound annual growth rate of 3.6%, while WFE increased at a compound annual growth rate of 8.0%, roughly double the rate of capacity additions. If demand for semiconductor devices continues to increase, then MSI should grow faster, and thus WFE at a higher multiplier. I am unsure if WFE’s relationship holds at 2x of MSI growth, but it should be a multiplier higher – say, 1.5x-2x MSI. That’s an acceleration compared to its historical rate, and what’s driving it is the demand for semiconductors and the technological headwinds we’re facing.
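As a quick sanity check on those growth rates, here is a back-of-the-envelope compounding calculation. Only the two CAGRs come from the data above; everything else is simple arithmetic.

```python
# Compounding the quoted 2012-2020 CAGRs shows how far WFE spending
# outran wafer capacity. Only the 3.6% and 8.0% rates come from the
# text; the rest is arithmetic.
years = 2020 - 2012          # 8 years
msi_cagr, wfe_cagr = 0.036, 0.080

msi_total = (1 + msi_cagr) ** years - 1   # total capacity growth
wfe_total = (1 + wfe_cagr) ** years - 1   # total equipment-spend growth

print(f"MSI total growth 2012-2020: {msi_total:.0%}")            # ~33%
print(f"WFE total growth 2012-2020: {wfe_total:.0%}")            # ~85%
print(f"WFE/MSI rate multiplier:    {wfe_cagr / msi_cagr:.1f}x")  # ~2.2x
```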
I want to move now to another trend that got called out repeatedly this last earnings season, one that frankly surprised me but is quite logical: old chips are more expensive.
Old Chips Cost More
It’s not just the newest, fastest, and most expensive chips that are costing more because of technological problems. The most interesting recent trend is that old chips are starting to see price increases. The recent inflection in automotive semiconductors is driving demand not for the latest and greatest, but for older, more mature technologies. The problem is that meaningful capacity has never been added for older technologies; most of the time, fabs would just become “hand-me-downs” as the leading edge pushed forward, and the fab equipment would continue to be used and depreciated. Using fully depreciated fab equipment meaningfully lowered the cost to make a semiconductor, especially after 10+ years.
That’s been an important aspect of pricing. A fab being maintained without much incremental capital, yet still producing chips, is what has driven down the price of older chips so much over time. Often there would even be improvements in yield, which further lowered costs. The thinking went that a leading-edge chip that cost hundreds of dollars in 2000 would cost pennies in 2021 because the fab would be fully depreciated.
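To see why depreciation matters so much, here is a toy Python calculation. All the numbers are hypothetical, chosen only to illustrate the mechanics, not to reflect any real fab's economics.

```python
# Illustrative sketch (all numbers hypothetical) of why a fully
# depreciated fab can sell wafers so cheaply. Wafer cost is roughly
# depreciation plus variable cost, spread over wafer output.

fab_capex = 2_000_000_000        # hypothetical fab cost, USD
depreciation_years = 5           # hypothetical straight-line schedule
wafers_per_year = 500_000
variable_cost_per_wafer = 1_500  # materials, labor, utilities (hypothetical)

# Early years: capital cost is charged against every wafer.
dep_per_wafer = fab_capex / depreciation_years / wafers_per_year
cost_new_fab = variable_cost_per_wafer + dep_per_wafer

# After full depreciation: only the variable cost remains.
cost_depreciated_fab = variable_cost_per_wafer

print(f"Wafer cost, new fab:         ${cost_new_fab:,.0f}")          # $2,300
print(f"Wafer cost, depreciated fab: ${cost_depreciated_fab:,.0f}")  # $1,500
# A greenfield lagging-edge fab must compete against the lower number,
# which is why prices must rise to justify new mature-node capacity.
```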
But there’s a problem with that. We have a situation that’s never happened before. Historically, demand was clustered toward the leading edge. Today, we have demand at the lagging edge, too. In fact, the demand for older chips is starting to rise sharply. This is driven by automotive and IoT production, as most of these applications use older, more mature technologies, which have better yield, cost, and, importantly, reliability. And even though demand has spiked, no capacity has been added. It’s very rare to add lagging-edge production.
Up until very recently, there was ample capacity at the lagging edge. It would have been unheard of to add capacity to the lagging edge. A fab was considered “full” if it was running at 80%. Now, trailing-edge fabs are running at close to 100%. Something must change.
This is one of the drivers of the automotive semiconductor shortage: higher demand and no supply or incentives to add more supply. Most semiconductor firms and fabs are obsessed with the leading edge because they can make higher profits. Now firms are waking up to maintaining the lagging edge. The “old” chips are becoming just as important as new chips, and to add capacity, firms had to start making large capital additions again.
There are obvious pricing problems in this scenario. With a new fab, a company can’t turn a profit selling a lagging-edge chip at the price that was previously dictated by a fully depreciated fab. Prices have to go up. This dynamic was discussed by Silicon Labs’ CEO last quarter:
just in terms of the cost increases and the durability of that, I think that we are going into kind of a new phase of the semiconductor industry, where we’ve got Moore’s Law and advanced nodes becoming more and more expensive and you’ve got mainstream technology now full. And it used to be that the digital guys would move out and the N minus 1, N minus 2, N minus 3 nodes would be fully depreciated fabs that you would move into. We have now reached a point where the mix, the ratio between advanced and mainstream is causing fabs TSMC and others to build new mainstream technology, and that means that those fabs are not fully depreciated. So a large element of the cost increases that the industry is seeing right now is because of the additional CapEx that’s having to be put in to build new capacity across the nodes, not just at the advanced nodes, but across.
And so if you look at the cost increases that we’re seeing in other — it’s across the industry, there is a certain element of that, that’s durable over time. And so this is a step function in terms of the cost structure of the industry to match the demand that we’re seeing and the increasing content of electronics throughout the economy and the acceleration of demand that we’ve seen through the pandemic has really pushed that forward and driven us into the supply constraints. We’ll work through that, but to work through that is requiring a lot of CapEx, and that’s got to be recouped. And that’s got to flow upstream from our suppliers to us, to our customers and that’s what you’re seeing right now.
What’s great is that this wasn’t a single company saying this in a vacuum. On Semi, NXP Semiconductors, Microchip, and other “mainstream” or lagging-edge semiconductor companies affirmed that this is going to continue because demand at the lagging edge is higher than supply. This is completely new territory for the industry, and not only a confirmation of the strategic importance of semiconductors, but a confirmation of the broad-based demand. There is really only one solution — adding capacity — but fabs are uncertain they could make a profit adding greenfield lagging edge without raising prices. In turn, they’re concerned about the demand for these “new” lagging-edge chips.
It’s a standoff. Fabs and semiconductor companies are uncertain that this will last, but Auto OEMs and the like seem to have insatiable demand. For a fab, it’s hard to change your behavior in one year against a trend that has lasted decades. Even harder is to go through those difficult changes and expect customers to accept higher prices. UMC put it well in its Q2 2021 call:
But for any greenfield capacity expansion, for the mature node, you are competing with a fully — most likely, you’re going to be competing with a fully depreciated capacity. Unless the demand is significantly important to the customer, but the economics — if the economics stay the same, it will be very difficult to have a justified ROI. However, if the customer is willing to face those challenges together with us, we will definitely explore those opportunities.
Put differently, pay up or continue the shortage. And given that the lead times aren’t really improving, I think that paying up marginally will be the outcome. There has to be some kind of compromise here, and right now it’s coming in the form of non-cancelable, non-returnable orders (NCNR). Fabs are not going to expand trailing-edge capacity unless their customers truly commit to orders that cannot be canceled or returned. It’s uncertain how much double ordering is going on, but NCNRs are a pretty strong commitment to adding long-lived demand.
While the addition of capacity should eventually smooth out price increases over time, this will take multiple years, especially if the demand from automotive and IoT devices continues. I don’t think that prices will rise forever at the lagging edge, but I think we should expect the historical price decreases to not be as large as they used to be. Lastly, that leaves me with the 800-pound gorilla — Taiwan Semiconductor Manufacturing Company.
TSMC Is Not Lowering Its Prices
Tool prices for leading-edge products are increasing, and this should hurt the margins of TSMC and other leading-edge fabs. While we can see gross margin contraction at Intel, that is more a function of company-specific investment to catch up to TSMC. At TSMC, the exact opposite is happening. TSMC is convinced that it can improve its margin with smaller nodes. The company reiterated this multiple times on its calls, and given TSMC’s one-of-a-kind product, the higher gross margin is very justified. The reality is that TSMC is not a price-taker but a price-setter. The primary reason: it has a stranglehold on all leading-edge manufacturing.
100% of sub-10nm logic in the world was made at TSMC in 2019. It’s likely that Samsung has caught up, but in terms of the leading edge, it’s just TSMC leading the pack. Given the concentration of fabrication power at TSMC, customers really don’t have another option. If a fabless company wants to sell its latest and greatest chips at a higher margin, they must go to TSMC. Meanwhile, TSMC’s gross margin has slowly drifted higher over the years.
Despite intense CapEx increases, which flow through to cost of goods sold as depreciation, their gross margin is still high, and the company believes it will stay that way.
TSMC put it this way on its most recent call, and given its absolutely staggering capital expenditures, I believe it’s justified:
Even as we shoulder a greater burden of the investment for the industry, by taking such actions, we believe we can achieve a proper return that enables us to invest to support our customers’ growth and deliver long-term profitable growth with 50% and higher gross margin for our shareholders.
Right now, in order to shoulder the higher CapEx burden, the company is passing on costs meaningfully. It recently hiked chip prices by 20%, and it can and will raise prices more if the cost of making an advanced semiconductor continues to climb. Until the situation changes, there’s no other meaningful competition in town. If Samsung and Intel get their act together in foundry, maybe they will be willing to take lower gross margins, but until then it’s TSMC’s world we’re living in. It’s going to raise prices on the back of higher capital costs.
The Price of a Semiconductor is Rising and It’s Not Transitory
Each of these factors on its own would be a tailwind to rising prices. Together, they make it a certainty that prices are going to rise from here. Between more steps leading to higher tool costs, the declining economic benefit of Moore’s Law, and lagging-edge capacity problems, I believe that the historical cost declines we have seen in semiconductors will start to slow. In some places, I believe the price increases at the trailing edge are here to stay. Before you go wild with inflation speculation, I want to reiterate that semiconductors are only a ~$530 billion industry, or half a percent of global GDP. This is not going to cause rampant inflation everywhere.
The demand side of the equation doesn’t seem to be slowing in the slightest. Higher demand is coming from more verticals than ever, with PC, mobile, data center, automotive, and IoT each a credible, large, and growing end market. Certain applications, such as machine learning, are becoming more compute-intensive going forward. As software ate everything in our lives, silicon made the meal possible. All this is happening as Moore’s Law is screeching to a halt. In simple terms: higher demand + lower supply improvements = prices go up.
I’m going to dive into who I think benefits in the paid-only section. Thanks for reading this far.
Dan is again joined by Stacy Rasgon, Managing Director and Senior Analyst, U.S. Semiconductors at Bernstein Research. Stacy is an unusual semiconductor analyst as he holds a Ph.D. in chemical engineering from MIT. His substantial technical knowledge allows for a deep dive on Intel and other semiconductor topics that you will find refreshing and quite interesting.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
Verific is an unusual company. They are completely dominant in what they do – providing parsers for Verilog/SV, VHDL and UPF. Yet they have no ambition to expand beyond that goal. Instead, per Michiel Ligthart (President and COO), they continue to “sharpen the saw”. This is an expression I learned in sales training, habit #7 from The 7 Habits of Highly Effective People: constantly refining and polishing (or sharpening) the tools you already have rather than launching out into building new tools. That’s a great way to keep existing customers loyal and to steadily grow a business. They are still investing in interesting development, but it is all around these core tools.
The Core Business
Michiel told me that their SystemVerilog business continues very strong, as does VHDL. A blow perhaps for those who hope VHDL is approaching its end. Apparently this is partly a function of popular IPs available only in VHDL and partly a function of markets and organizations which are still VHDL-centric (think defense, aerospace, FPGA).
Speaking of markets, Verific continues strong with all the major and many of the minor EDA vendors. It also continues to enjoy popularity among semiconductor and systems development groups in support of utility applications – glue, proprietary transforms, analytics and other functions. Michiel mentioned that demand for their UPF parser was originally low but has been picking up recently. I would guess the slow start probably reflects EDA vendors wanting to keep UPF parsing in-house for now. The recent pickup in demand then reflects design teams extending to analysis and manipulation of UPF for their own purposes.
Michiel acknowledged that they continue to polish for corner cases (I remember this was a constant battle at Atrenta). Verific may not be perfect (even the standards aren’t perfect), but they continue to pursue that goal as their priority.
Beyond the Basics: Cross Module References
Support for these is a good example of sharpening the saw with a little new development. Cross module references (XMRs) are hierarchical references you can embed in your RTL, like top.A.B.x. XMRs are essential for assertions and more generally for verification/debug. I can even imagine them being useful for quick what-if experiments within functional code, without having to go through the pain of making a change synthesis-legal.
It turns out that handling XMRs efficiently in a parser is trickier than it might appear. The logic for a simulator is easy enough, but uniquification – making a separate copy of a module definition for each instance so that a reference into one instance doesn’t affect the others – can blow memory consumption up massively. Verific is now finishing up an improved data structure to better support path-based uniquification, which can dramatically limit this growth.
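To make the memory issue concrete, here is a minimal Python sketch of the two approaches. This is my own illustration of the general idea, not Verific’s actual data structure: naive uniquification deep-copies the module tree per instance, while a path-keyed overlay keeps definitions shared and only pays for the paths actually referenced.

```python
# Illustrative sketch only -- not Verific's implementation.
import copy

class Module:
    def __init__(self, name, signals=None, children=None):
        self.name = name
        self.signals = signals or {}      # signal name -> value/attributes
        self.children = children or {}    # instance name -> shared Module definition

# A folded hierarchy: one shared definition reused by many instances.
leaf = Module("B", signals={"x": 0})
mid  = Module("A", children={"B0": leaf, "B1": leaf})
top  = Module("top", children={"A0": mid, "A1": mid})

# Naive approach: deep-copy the whole subtree so each instance is unique.
# Memory grows with the total number of instances in the design.
def uniquify_naive(module):
    return Module(module.name,
                  copy.deepcopy(module.signals),
                  {i: uniquify_naive(c) for i, c in module.children.items()})

unique_top = uniquify_naive(top)          # every shared definition duplicated

# Path-based approach: keep definitions folded and record per-instance
# data in a table keyed by the hierarchical path string.
overlays = {}                             # "top.A0.B1" -> {signal: value}

def resolve_xmr(root, path):
    *inst_path, signal = path.split(".")
    node = root
    for inst in inst_path[1:]:            # walk instance names below the root
        node = node.children[inst]
    return overlays.get(".".join(inst_path), node.signals).get(signal)

overlays["top.A0.B1"] = {"x": 1}          # only touched paths cost memory
print(resolve_xmr(top, "top.A0.B1.x"))    # -> 1
print(resolve_xmr(top, "top.A1.B0.x"))    # -> 0 (shared default)
```

The point of the second approach is that the overlay table grows with the number of XMR targets rather than the number of instances, which is why path-based handling can limit the blow-up so dramatically.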
Scripting in Python
This one should attract a lot of attention – it certainly has among system clients. EDA vendors and CAD teams want to work with Verific at the C++ level, but that only makes sense when developing or extending tools that will have a wide audience and be needed for many years. Design groups have always needed utilities to manipulate design content for a smaller audience and a likely shorter half-life. They need scripting capabilities, and the scripting language of choice these days is Python, which is nicely reflected in Verific’s INVIO add-on to its parsers. (Sorry, Tcl fans. You are strong in implementation, but you have no play in pure RTL manipulation.) I predict Python manipulation of SV, VHDL and UPF is likely to attract a very eager following.
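For a flavor of what this enables, here is a hypothetical sketch of the kind of small utility a design team might write once a parser hands them the design as Python objects. To be clear, the classes and fields below are stand-ins invented for illustration; they are not INVIO’s actual API.

```python
# Hypothetical sketch -- stand-in classes, not INVIO's actual API.
from dataclasses import dataclass, field

@dataclass
class Net:
    name: str
    width: int
    is_register: bool = False

@dataclass
class Instance:
    path: str
    nets: list = field(default_factory=list)

# Pretend the parser produced this elaborated view of the design.
design = [
    Instance("top.cpu",  [Net("pc", 64, True), Net("regfile_q", 128, True)]),
    Instance("top.uart", [Net("txd", 1), Net("fifo_q", 72, True)]),
]

# Utility: report every register wider than 64 bits -- the sort of
# one-off check a design team scripts for itself rather than buying a tool.
for inst in design:
    for net in inst.nets:
        if net.is_register and net.width > 64:
            print(f"{inst.path}.{net.name}: {net.width} bits")
# -> top.cpu.regfile_q: 128 bits
#    top.uart.fifo_q: 72 bits
```

A ten-line script like this is exactly the smaller-audience, shorter-half-life use case where Python beats a C++ integration.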
In short, Michiel and team continue to sharpen the saw in interesting ways, always with a focus on keeping loyal customers happy above all. You can learn more about Verific HERE.
Prototyping solutions have been in the news a lot lately, and the FPGA-based prototyping approach is widely used. On a panel session at DAC 2021, Amir Salek, Head of Silicon for Cloud and Infrastructure at Google, had the following to say: “FPGA prototyping is a tremendous platform for testing and validation. We are doubling down on emulation and prototyping every year to make sure that we close the gap between the hardware and software.”
With unrelenting time-to-market pressures, companies deploy different tactics to compress their product development cycles. As a product launch requires having both the hardware and software ready at the same time, companies seek a head start on software development. Rather than waiting for their chips to be ready in final form, they leverage FPGA prototyping to develop and test their software. Hardware/software co-verification is another important aspect of product development, enabling integration of software with hardware well before final chips and boards become available.
While FPGAs, with their flexibility, lend themselves nicely to implementing prototypes of very complex, high-performance SoCs, they do come with some drawbacks. One of those drawbacks is that they trail in supporting the most advanced connectivity interface speeds. System interfaces such as PCIe, DDR and HBM have been evolving rapidly to support ever-faster data transfer rates. While mainstream FPGAs can fit large SoCs, they lag in supporting the most advanced interface speeds. More advanced FPGAs can support the latest PCIe speeds, but they cannot yet fit large SoCs.
S2C solved the above dilemma through a partnered solution with Avery Design Systems. Avery’s PCIe and memory speed adapters can be synthesized into S2C’s FPGA prototyping platforms to support up to PCIe 6.0, HBM3 and LPDDR5 protocol interfaces. You can learn more about this S2C-Avery partnered solution from a recent SemiWiki post. That post provided just an overview of S2C’s product offerings, as its primary focus was on Avery’s speed adapters. The speed adapters cannot do their part without a complete FPGA prototyping platform.
As a global leader in FPGA prototyping, S2C has been delivering rapid prototyping solutions since 2003. S2C offers many different product lines leveraging both Intel and Xilinx FPGAs. Their solutions can fit a wide range of design sizes and support advanced interface speeds as well. Here is a short video clip about their Prodigy Logic Matrix LX2 prototyping solution. The following is a summary of their range of platforms and accompanying productivity tools.
S2C’s Prototyping Platforms
S2C’s offerings cover the full spectrum of prototyping needs, from high-end through mainstream to cost-sensitive designs.
Prodigy Logic Matrix
The Prodigy Logic Matrix is built to address the highly complex, high-performance, large SoCs used in applications such as data center, AI/ML, 5G and autonomous driving. It is a high-density FPGA prototyping platform that is optimized for space and connectivity, and it is designed for multi-system expansion to support capacities of billions of ASIC gates.
Prodigy Logic System
The Prodigy Logic System series is a compact, sleek, all-in-one design that offers maximum flexibility, durability and portability. Both Xilinx-FPGA-based and Intel-FPGA-based versions of these systems are available to match customers’ preferences.
Prodigy Logic Module
The Prodigy Logic Module series is specially designed with low profile and affordability in mind. The Prodigy Logic Modules are based on Xilinx’s KU and K7 FPGA devices.
S2C’s Productivity Tools
S2C’s prototyping solutions are accompanied by a suite of productivity tools that ease the setup, development, test and validation process.
PlayerPro
Prodigy PlayerPro is a tool that offers an integrated GUI environment and Tcl interface and works with S2C’s prototyping platforms. It makes it easy to take an existing design, compile it, partition it across multiple FPGAs, and generate the individual bit files. It also provides remote system management and setup for multi-FPGA debugging.
ProtoBridge
Traditionally, realizing the full benefits of a prototyping platform meant investing in additional hardware and other resources. With ProtoBridge, customers can leverage as much of their existing resources as possible. For example, they can use a PC’s memory to store data rather than providing additional memory on the design under test. They can also use the prototyping platform with other commercial or in-house verification tools through a C-based API over standard AXI. IP blocks connected to the AXI bus can be verified without processor cores or peripheral blocks. For all the details, refer to the Prodigy ProtoBridge page.
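To illustrate the flow this enables, here is a conceptual Python sketch of the host side. The real ProtoBridge API is C-based, and the names and transport model below are invented for illustration; the point is that stimulus lives in PC memory and the host drives an AXI-connected block directly.

```python
# Conceptual model of host-driven AXI transactions -- invented names,
# not the actual ProtoBridge C API.

class AxiLink:
    """Stand-in for the host-to-FPGA transport (PCIe in a real setup)."""
    def __init__(self):
        self.mem = {}                     # modeled DUT address space

    def axi_write(self, addr, words):     # burst write to the DUT
        for i, w in enumerate(words):
            self.mem[addr + 4 * i] = w

    def axi_read(self, addr, nwords):     # burst read from the DUT
        return [self.mem.get(addr + 4 * i, 0) for i in range(nwords)]

# Stimulus is held in PC memory; the AXI-connected IP block is exercised
# directly, with no processor core or peripherals on the prototype.
link = AxiLink()
stimulus = list(range(16))
link.axi_write(0x1000, stimulus)          # push test vectors
result = link.axi_read(0x1000, 16)        # read results back
assert result == stimulus
```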
Neuro
Neuro enables multi-dimensional management of resources over multiple projects and users. It helps optimize resource utilization by enabling users to quickly access FPGA compute power and CPU clusters wherever they may reside. This allows S2C’s customers to leverage their in-house resources as well as cloud-based resources.
Prototype Ready IP
S2C provides a large library of off-the-shelf interfaces and accessories for FPGA prototyping. All interfaces and accessories work with the Prodigy Logic Modules to further speed up and simplify the system prototyping process. Accessory modules are supplied as daughter boards that plug into the Prodigy Logic Module, providing pre-tested interfaces and reference design flows for easy bring-up.
S2C also provides professional services to customize interface and accessory modules to meet their customers’ application needs.
Multi-Debug Module
The S2C Prodigy Multi-Debug Module is an innovative debug solution for FPGA prototyping. It can run deep-trace debugging simultaneously on multiple FPGAs, tracing up to 32K signals per FPGA in eight groups of 4K signals each without recompiling, and it can store 8GB of waveform data without consuming the user design’s memory resources.
Availability
S2C FPGA prototyping products are in stock now, and customers can get their hands on them with a short lead time. With the industry facing supply chain constraints, that is refreshing to hear. For more information, contact S2C.