
The State of The Foundry Market Insights from the Q2-24 Results

by admin on 08-20-2024 at 10:00 am

TSMC Leads but Challengers Follow 2024

If you work in the Semiconductor or related industries, you know that industry cycles can profoundly impact your business. It is crucial for strategic development to invest at the appropriate time and to reef the sails when necessary.

As a semiconductor investor, you’re accustomed to the ebb and flow of industry cycles. It’s a reality that even the most stable long-term growth stocks must adapt to. But with the right understanding and preparation, you can anticipate these cycles and make informed investment decisions.

My work aims to extract insights from the analysis of public data, insights that can be used to predict where the Semiconductor business is heading. While Semiconductor companies are different, these are the waters all semiconductor companies are navigating. Predicting a company’s business is different from predicting the stock price. The stock market has its logic, but eventually, it will align with the company’s underlying business.

One of the pivotal areas of the Semiconductor industry is the foundry market, which is dominated by TSMC. Born out of frustration with the American industry’s inability to compete effectively with the Japanese, the Taiwanese giant is unrivalled from an advanced logic perspective. TSMC always deserves a particular analysis in my research, but researching the entire foundry industry for insights is often valuable.

The development of the Foundry market impacts many companies. A survey from last week confirmed this:

Q2-24 Status of the foundry market.

It was another growth quarter for the foundry companies. Collectively, the industry grew by 10.2% QoQ and 19.6% YoY to $33.5B, which is still some distance from the last peak of $35.4B.
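The implied prior-period figures can be backed out from these growth rates. A minimal sketch in Python, using only the numbers quoted above (variable names are illustrative):

```python
# Back out the implied prior-quarter and year-ago foundry revenue from the
# reported growth rates (total $33.5B, +10.2% QoQ, +19.6% YoY, per the text).
q2_24 = 33.5  # $B, reported collective foundry revenue

q1_24 = q2_24 / 1.102  # implied Q1-24 revenue
q2_23 = q2_24 / 1.196  # implied Q2-23 revenue

print(f"Implied Q1-24: ${q1_24:.1f}B")  # ~ $30.4B
print(f"Implied Q2-23: ${q2_23:.1f}B")  # ~ $28.0B
```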

TSMC’s dominance is undeniable, and as the current market situation is driven by AI’s need for leading-edge GPUs, it is no surprise that TSMC gained additional market share in Q2-24, although only marginally. The rest, led by the Chinese foundries, are matching the pace.

After some wobbling quarters, TSMC is now consistently increasing market share. TSMC’s market share has passed 62%, up from the latest low of 52% 10 quarters ago. This is likely to continue for the next few quarters as Intel and Samsung struggle to challenge TSMC’s leading-edge leadership.

The operating profit took a significant jump upwards and is completely dominated by TSMC. The leading-edge AI boom is likely to benefit TSMC even more, and it is obvious why TSMC is content even though Nvidia captures most of the value.

The increasing operating profit could indicate that we are entering shortage territory again soon, as it is only a couple of billion short of the record.

However, the collective inventory position of the Foundries shows that, in all likelihood, the industry is in balance and is not near the capacity maximum. A deeper dive into the numbers can uncover more insights.

The deeper insights

The revenue from a technology perspective is moving at a rapid pace. More than half (SMIC excluded from this analysis) of the top 5 foundry revenue is at 7nm-3nm. That is up from just over a third 2.5 years ago.

While not a problem yet, we are not far from a situation where it will become challenging to get mature technologies from Western companies.

The wafer fab capacity has increased significantly over the last cycle. For the top 6, combined capacity has grown at an 11% CAGR, with SMIC leading the pack. With CapEx investments higher than revenue, the Chinese foundry was able to grow capacity at a 29% CAGR since the beginning of 2022.
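For reference, the CAGR arithmetic behind these capacity figures can be sketched as follows. The growth rates and the 2.5-year window (start of 2022 to mid-2024) come from the text; the helper function and the normalized starting value are illustrative:

```python
# Sketch: compound annual growth rate (CAGR) over a fractional period.
def cagr(start, end, years):
    """Annualized growth rate implied by growing from start to end over years."""
    return (end / start) ** (1 / years) - 1

years = 2.5
# A foundry growing capacity at a 29% CAGR for 2.5 years ends up ~1.89x larger:
multiple = (1 + 0.29) ** years
print(f"Capacity multiple at 29% CAGR over {years} years: {multiple:.2f}x")
print(f"Recovered CAGR: {cagr(1.0, multiple, years):.0%}")
```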

The utilisation of capacity is on the rise for all major foundries, first and foremost the Chinese operations, while TSMC shows a more modest increase in the 80% range. In all likelihood, TSMC has quite different loading of its factories according to technology, with pressure on 3nm.

Brick owners of the Semiconductor Industry

It is no secret that semiconductor manufacturing is expensive, which is why most semiconductor companies have chosen the fabless model, leaving the investments to the foundries.

While capacity levels give a good idea of the online capacity at a given time, they do not indicate future capacity. It is valuable to look at the companies’ balance sheets and the financial value of the manufacturing assets. Property, Plant, and Equipment are almost all manufacturing assets for semiconductor companies.

It is worth noting that manufacturing assets are allocated to the country of incorporation, so all of Intel’s PPE will register as US. Of course, reality is more complex, so this cannot be used to evaluate the impact of the US CHIPS Act.

The PPE view shows that the financial value of the manufacturing assets is closing in on $550B. This includes not only ready manufacturing facilities but also land and construction in progress.

We divide the PPE into three categories to get a feel for the strength of the different manufacturing models:

  • IDM – Integrated Device Manufacturers, manufacturing their own chips
  • Foundry – Manufacturing for IDMs and fabless companies
  • Mixed Manufacturing – Speciality fab owners that buy advanced logic from foundries

The chart shows healthy growth, indicating more capacity will come online in the near term, but also a decline in growth in the last quarter. This signals that the investments from the last peak are coming to an end, and PPE growth will be slower.

The CapEx spend can be analysed to gain insight into longer-term future capacity.

It may seem a little bit counterintuitive, given all of the noise about the Chips Act, but CapEx investments are actually declining. It could be easy to interpret this as the Chips Act not working. However, a more likely answer is that the Chips Act changed the investment strategy of most of the large manufacturers to align with it, and investments will accelerate later.

The investment levels in each of the manufacturing types are well above the replacement capex, which represents the capex needed to maintain the same level of capacity.
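As a rough illustration of the replacement-CapEx concept, it can be approximated as the asset base divided by its useful life. The figures below are hypothetical round numbers chosen purely to show the arithmetic; the article does not disclose its replacement-CapEx model:

```python
# Sketch of the replacement-CapEx idea: if manufacturing equipment has a
# useful life of ~N years, roughly PPE/N must be spent each year just to
# stand still. All numbers are illustrative assumptions.
ppe = 550.0          # $B, manufacturing asset base (order of magnitude from the article)
useful_life = 8      # years, assumed depreciation schedule

replacement_capex = ppe / useful_life
actual_capex = 90.0  # $B/yr, hypothetical industry spend

print(f"Replacement CapEx: ~${replacement_capex:.0f}B/yr")
print(f"Net capacity-adding spend: ~${actual_capex - replacement_capex:.0f}B/yr")
```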

Conclusion

It was a good quarter for the Foundry companies, particularly TSMC and the Chinese foundries. TSMC’s strong profitability shows that the company does not have to make concessions to keep and win business and is still miles ahead of Samsung.

While TSMC is still far from full capacity, the Chinese foundries are getting closer. As they have lost a significant proportion of their Western business, this is a sign that the Chinese electronics manufacturers are increasing their purchases.

The short-term capacity increase is likely to tail off, as indicated in the PPE development for both IDMs and foundries. This is a result of the last investment peak and the following CapEx pause.

The large gap between current CapEx and replacement CapEx will benefit the longer-term capacity. This has been somewhat delayed but will accelerate once TSMC starts filling its new factories. There will be sufficient capacity, but maybe not the right kind.

The dominance of TSMC will continue, but the Chinese Foundries are punching above their weight and will soon own a significant part of the mature technologies. TSMC will only make limited investments into these technologies and Western fabless companies will have to find a way back to the Chinese foundries without alienating the US government.


Also Read:

A Post-AI-ROI-Panic Overview of the Data Center Processing Market

TSMC’s Business Update and Launch of a New Strategy

Has ASML Reached the Great Wall of China

Will Semiconductor earnings live up to the Investor hype?


Weebit Nano is at the Epicenter of the ReRAM Revolution

by Mike Gianfagna on 08-20-2024 at 6:00 am


It’s well known that flash is the embedded non-volatile memory (NVM) incumbent technology. As with many technologies, flash is bumping into limits such as power consumption, speed, endurance and cost. It is also not scalable below 28nm. This presents problems for applications such as AI inference engines that require embedded NVM, typically on a single SoC below 28nm. Resistive random-access memory, referred to as ReRAM or RRAM, is emerging as the preferred alternative to address these shortcomings. Weebit Nano is paving the way for this change. Dan Nenni provides a good overview of the company’s ReRAM technology in this DAC report.  The movement is gaining speed. In this post, I’ll review how things are coming together and how Weebit Nano is at the epicenter of the ReRAM revolution.

The Times Are Changing

At the recent TSMC Technology Symposium, there was a lot of discussion about ReRAM. Indeed, the foundry market leader (who refers to it as RRAM) talked up the technology in its presentation on embedded NVM. The company shared that RRAM, a non-volatile memory formed between backend metal layers, is an excellent flash replacement with good scalability.

According to the TSMC website, TSMC continues to explore novel RRAM material stacks and their density-driven integration, along with variability-aware circuit design and programming constructs, to realize high-density embedded RRAM-based solution options for AIoT applications.

Tech Insights recently reported that Nordic Semi’s new Bluetooth 5.4 SoC has 12 Mb embedded ReRAM on board. The piece went on to say that with multiple resistive states, which can correspond to multiple memory states, ReRAM is a leading contender for machine learning designs. Nordic’s chip is fabricated in TSMC 22 nm ultra-low leakage (22ULL) with an embedded resistive random-access memory (eReRAM) process.

According to the Yole Group, by 2028 the total embedded emerging non-volatile memory market is expected to be worth ~$2.7 billion. Yole cited the first microcontroller product for automotive applications employing embedded RRAM, from Infineon, as an example of RRAM momentum. The Infineon AURIX TC4 MCU includes 20 MB of non-volatile resistive memory and is manufactured by TSMC in a 28nm process.

Another market research firm, Objective Analysis in its EMERGING MEMORIES BRANCH OUT report, states that over time, the NOR embedded in most SoCs will be almost entirely replaced by either MRAM, ReRAM, FRAM, or PCM.

This momentum isn’t limited to TSMC. On July 31 of this year, it was announced that Weebit Nano and DB HiTek had taped out a ReRAM module in DB HiTek’s 130nm BCD process. It was reported that the demo chips will be used for testing and qualification ahead of customer production and will demonstrate the performance and robustness of Weebit Nano’s ReRAM technology. And there’s more. Read on…

More From a Recent Memory Conference


Embedded non-volatile memory technology isn’t the only thing experiencing change. Conference names are evolving as well. The Design Automation Conference is now DAC: The Chips to Systems Conference.  The Flash Memory Summit is now FMS: the Future of Memory and Storage. At that conference, which was held in Santa Clara from August 6-8, 2024, Weebit Nano’s VP of Quality and Reliability Amir Regev gave a presentation on emerging memories. During that presentation, Amir presented more evidence of the ReRAM revolution.

Amir presented test results for Weebit’s ReRAM technology implemented on GlobalFoundries 22FDX wafers. This is significant, as it is the first time any public data has been shared about NVM performance at nodes such as 22nm. You can read the press release announcing this milestone here. In that release, it was reported that Mr. Regev would also highlight the performance of Weebit ReRAM on GlobalFoundries 22FDX® wafers, including endurance and retention data – the first such ReRAM results.

Here are some details that Amir presented:

  • Earlier this year Weebit received GF 22FDX wafers incorporating our ReRAM module prototype
    • 8Mb, 128-bit wide, targeting 10K cycles and 10yr retention at 105°C (automotive to follow)
    • Characterization and qualification activities are ongoing
  • Pre-qualification results show:
    • Weebit’s ReRAM stack is stable at 105°C, with cycling endurance up to 10K cycles
    • Very good data retention pre- and post-cycling is maintained for a long time at high temperatures (150°C), as shown in the figure below.

Hi Temp Cycling Results

Amir also provided a broad summary of Weebit’s qualification work:

  • Qualified modules at 85°C and 125°C
    • Temperatures specified for industrial and automotive grade 1 ICs
    • Qualified for endurance and 10yr retention per JEDEC industry standards
  • AEC-Q100 qualification (150°C and 100K cycles) in progress
    • Good results achieved, collecting statistical data for full qualification
  • Technology demonstrated on multiple process nodes
    • From 130nm to 22nm, Al / Cu, 200mm / 300mm
    • Successfully simulated on FinFET nodes

Amir also described the work underway to qualify Weebit’s ReRAM under extended automotive conditions. He mentioned that the excellent results so far on temperature cycling provide a strong foundation for automotive qualification.

Weebit Nano’s ReRAM is finding application in a wide range of processes, foundries and applications. Beyond those mentioned so far, the company also recently announced work with Efabless on SkyWater’s 130nm CMOS (S130) process. This work enables fast and easy prototyping of intelligent devices using Weebit’s technology. Weebit Nano is creating a wide footprint in the market.

To Learn More

ReRAM technology is poised to change the game for many applications. If your next project includes embedded non-volatile memory, you should see how Weebit Nano can help. You can learn more about the company’s technology here. If you want to chat with the Weebit team, you can start here. Weebit Nano is at the epicenter of the ReRAM revolution; join in.


What are Cloud Flight Plans? Cost-effective use of cloud resources for leading-edge semiconductor design

by Christopher Clee on 08-19-2024 at 10:00 am


Embracing cloud computing is highly attractive for users of electronic design automation (EDA) tools and flows because of the productivity gains and time to market advantages that it can offer. For Siemens EDA customers engaged in designing large, cutting-edge chips at advanced nanometer scales, running Calibre® design stage and signoff verification in the cloud proves advantageous, as evidenced by the benchmark results discussed in this article. Calibre flows not only facilitate swift design iterations with modest compute resources but also consistently improve with each release. Cloud deployment offers a dual benefit: design teams avoid waiting for local resources and gain the flexibility to scale up during peak demand and leverage Calibre’s scalability for increased design throughput.

But to be cost-effective, cloud resources and infrastructure must be tailored to meet the individual and diverse demands of the many tools that constitute the semiconductor design flow. So what is the optimal configuration for running Calibre applications in the cloud? Which of the dozens of classes of virtual machines are best for running Calibre applications? How do cloud users know they are spending their money wisely and getting the best results? We set out to answer all these questions with a collaboration between Amazon Web Services (AWS), Amazon Annapurna Labs (Annapurna) and Siemens EDA to evaluate results from a series of Calibre flow benchmarks run inside Annapurna’s production virtual private cloud (VPC), which is hosted on AWS. After this evaluation, we developed a set of best known methods (BKMs) based on our experiences and the results.

Environment setup

The cloud experience works best when it is configured to be seamless from an end-user’s perspective. The setup that is probably most familiar to semiconductor designers in their on-premises systems is one where each user has an exclusive small head node assigned that is used to submit all their jobs to other machines using some kind of queue manager. The head node is also useful for housekeeping purposes, like editing files, moving data, capturing metrics, etc.

The Calibre nmDRC benchmarks detailed in this paper took advantage of the Calibre MTFlex distributed processing mode running on a primary machine with a series of attached remote machines. In these cases, we used the same machine type for both the remote hosts and the primary. Other tests simply used multithreading on a single machine. A virtual private cloud setup is illustrated in figure 1.

Figure 1: VPN access from a VNC client to a dedicated head node, and then to primary and remote machines inside the cloud environment

Calibre nmDRC benchmark results

Figure 2 shows results for Calibre nmDRC runtime and peak remote memory usage for an Annapurna 7nm design when using an increasing number of remote cores. Runtime is shown in hours, and peak remote memory usage in GB. All runs used the Calibre MTFlex distributed processing mode and a homogeneous cluster of machine types for the primary and remote nodes (AWS r6i.32xlarge instances). The horizontal axis shows the number of remote cores, which were doubled with each subsequent run. Each run used a consistent 64-core primary machine.

Figure 2. Calibre nmDRC runtime and peak remote memory with an increasing number of remote cores for the 7nm Annapurna design

The dark blue line is the baseline run using the same Calibre nmDRC version that Annapurna originally used in production on these designs with stock rules from their foundry partner. The light green line shows results using a more recent Calibre nmDRC version with optimized rules and instead of reading the data in from an OASIS file, the design data was restored from a previously saved reusable hierarchical database (RHDB) which in this case saved about 20 minutes per run. The light blue dotted line shows the percentage time saving between these two sets of runs. The purple line is the Calibre nmDRC Recon run, which automatically selects a subset of the foundry rules to run to find and resolve early design iteration systematic errors. Siemens EDA always recommends that customers run the Calibre nmDRC Recon tool on dirty designs before committing to a full Calibre run. This helps find the gross errors in the design very quickly, so they can be eliminated with short cycle times.

Determining how many remote cores to use in the cloud is dependent on the size and complexity of the design, the process technology, and the complexity of the rule deck. The optimal spot is found around the “knee” in the curve on these charts (for the design in figure 2, around 500 remote cores). The peak memory plots show that there was plenty of headroom for physical memory – each remote had 1TB RAM. The cost of these runs is typically in the range of a couple hundred dollars. Calibre customers typically use 500 remote cores as a minimum for full-chip Calibre nmDRC runs at advanced nodes. The data supports the Calibre value proposition of maintaining fast turnaround based on a modest amount of compute resource. However, the data also shows that scalability continues to even greater numbers of cores, giving Calibre users headroom to further compress cycle time if needed.
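One way to locate that "knee" programmatically is to keep doubling cores only while the speedup from each doubling stays above a threshold. A minimal sketch; the (cores, hours) pairs are hypothetical placeholders shaped like the charts described above, not actual Calibre benchmark data:

```python
# Hypothetical runtime-vs-cores curve: core counts doubling each run, with
# diminishing returns past ~500 remote cores (placeholder numbers only).
runs = [(125, 12.0), (250, 7.0), (500, 4.5), (1000, 3.6), (2000, 3.2)]

def knee(runs, min_speedup=1.3):
    """Return the last core count whose doubling still cut runtime by min_speedup x."""
    best = runs[0][0]
    for (c0, t0), (c1, t1) in zip(runs, runs[1:]):
        if t0 / t1 >= min_speedup:
            best = c1  # this doubling still paid off
        else:
            break      # diminishing returns: stop here
    return best

print(knee(runs))  # -> 500 with these placeholder numbers
```

The threshold is a tunable assumption; teams that value cycle time over cost would lower it and run with more cores.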

Figure 3 shows similar results for a Calibre nmDRC run on a 5nm Annapurna design. Here again, the optimal spot is around 500 remote cores, with fewer than 5 hours of runtime.

Figure 3. Calibre nmDRC runtime and peak remote memory with an increasing number of remote cores for a 5nm Annapurna design

Here again, the data demonstrates that the Calibre nmDRC tool is very resource-efficient, so it is not necessary to use thousands of remote CPUs to get reasonable design cycle times. Design teams can readily perform multiple turns per day using a modest number of cores, with a correspondingly modest associated cost. If it is helpful or necessary to squeeze in one or two more design turns per day, they can increase the number to 1,000 remote cores. The advantage of operating in the cloud is that more machines are always available, and it is highly likely that they will spin up very quickly.

Calibre interactive run management results

Both Annapurna designs were opened in the Calibre DESIGNrev interface, and the Calibre Interactive invocation GUI was used to initiate Calibre nmDRC and Calibre nmLVS runs. In addition, the Calibre Interactive integration with the Altair Accelerator (NC) queue manager was assessed, and Calibre RVE invocation and cross-probing to the design in the Calibre DESIGNrev interface was evaluated. All Calibre Interactive operations were fast and responsive.

Best known methods for EDA cloud computing

Following the benchmarks, we evaluated the results to encapsulate learnings and observations into suggested BKMs for running Calibre applications in the cloud. We generated general BKMs for optimizing spend in the cloud, improving cloud performance, and optimizing the general experience when using cloud-based resources, as well as specific BKMs for running Calibre flows in the cloud. These BKMs are encapsulated as Cloud Flight Plans, which are instructions for flow-specific optimizations that will allow Siemens EDA customers to leverage the availability, scalability and flexibility of cloud compute in a way that is very cost effective. Many of these resources are available through cloud landing pages on our Siemens EDA website. Our Cloud Reference Environment and other flow-specific assets and resources are available to our customers through our SupportCenter resource site.

Download our newest technical paper that describes BKMs for Amazon Web Services (AWS), using Annapurna Labs’ experience as a benchmark. Running Calibre solutions in the cloud: Best known methods.

Conclusion

In the realm of electronic design automation (EDA), cloud computing offers a compelling solution to the problem of constantly burgeoning design size and complexity. Design teams, armed with cloud-tuned machines, can work at an accelerated pace—launching jobs when needed, utilizing resources efficiently, and even running multiple tasks in parallel. Whether it’s accelerating design cycles or optimizing costs, the cloud provides flexibility. To navigate this celestial landscape effectively, understanding and optimizing cloud resources and configurations is key. Siemens EDA, in collaboration with major cloud providers like AWS, has crafted Cloud Flight Plans to guide mutual customers on their cloud journey. With Cloud Flight Plans as their compass, semiconductor designers can chart a steady course toward efficient, cloud-powered success.

Also Read:

Solido Siemens and the University of Saskatchewan

Three New Circuit Simulators from Siemens EDA

Siemens Provides a Complete 3D IC Solution with Innovator3D IC


AMAT Underwhelms- China & GM & ICAP Headwinds- AI is only Driver- Slow Recovery

by Robert Maire on 08-19-2024 at 6:00 am

One Trick Pony
  • AMAT reports good but underwhelming quarter
  • China slowing creates revenue & GM headwinds- ICAPs weak
  • AI remains the one and only bright spot in both foundry & memory
  • Cyclical recovery remains slow – Single digit Y/Y growth
OK quarter – still slow growing, revs up only 5% Y/Y

AMAT came in at revenues of $6.78B, up 5% Y/Y, with EPS of $2.12. Guidance is for $6.93B +/- $400M and EPS in a range of $2.00 to $2.36. Guidance was only slightly better than current estimates.

Given the standard “beat” versus true expected “whisper” numbers we would view the results as somewhat underwhelming and the after hours trading reflected that.

The recovery from the down cycle remains both slow and elusive

With revenues up only 5% year over year we are certainly not experiencing a “V” shaped recovery or a quick bounce back from the long downturn.

We have suggested many times that the recovery would be slow and arduous and the numbers we see underscore that view. As compared to previous downcycles that bounced back very quickly there are more headwinds and fewer tailwinds to drive the recovery.

AI is the “one trick pony” of the semiconductor recovery

The one and only bright spot of a lukewarm recovery is clearly AI, both in leading edge foundry (read that as TSMC, as they are the sole makers of NVIDIA chips) and HBM memory, which is the only bright spot in the overall lackluster memory market.

Although we are super bullish on all things AI, it will be difficult for semiconductor equipment makers to have a full scale cyclical recovery without other sectors improving as well.

NAND is still in the dumps and DRAM is really primarily HBM. In foundry logic it’s really just the leading edge, and again primarily TSMC, given Intel’s recent cut of capex spend.

Running on the two cylinders of TSMC & HBM, although strong, is not great for acceleration.

China slows significantly which also creates headwinds for gross margins

China is slowing from the mid-40s percent to the mid-30s percent of business. This obviously creates a revenue challenge but, more importantly, a gross margin challenge, as China customers have been paying significantly higher pricing (we pointed this out in our previous note about Chinese “sucker” customers).

This roughly 10% hit to equipment revenues and higher hit on gross margins adds to recovery headwinds.

Are ICAPs and the trailing edge permanently slower?

Management also talked about some weakness in the ICAPs, trailing edge business, and projections of mid single digit growth rates (not lighting things on fire).

Our view is that the industry went through a number of years of hyper growth in trailing edge equipment sales, driven in large part by new and expanding trailing edge fabs in China and Asia built to serve this significant market. Now we are in an overcapacity situation, such that there is clearly not enough need to buy new trailing edge tools.

The days of “unusual” hyper growth in trailing edge are over and will likely return to more modest numbers over the long run.

This trailing edge of the market has been especially good to Applied, and without it being strong, it will be another significant headwind for the company’s recovery.

The Stocks

Management demurred from making any projections about 2025, likely with good reason, as things may not be quite as strong as some analysts were thinking/hoping.

Although Applied management used the word “strong” many times, there were no numbers or projections to back that up, so estimates of future growth may be getting toned down a bit.

As we saw with Lam when they reported last month, the numbers were good but not “good enough” and the stock traded off.

Applied was up $10 during the day but gave back $5 in the aftermarket on what we viewed as a less than stellar quarter, guide and lack of long term projections.

In summary, AI is a great but only a singular driver while NAND, China, gross margins, ICAPs, the rest of DRAM, Intel capex all have headwinds.

We would consider taking some money off the table after the recent recovery in the semi sector from a negative overreaction that caused an overall market correction that we have more or less recovered from.

Visit Our Website
About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

LRCX Good but not good enough results, AMAT Epic failure and Slow Steady Recovery

The China Syndrome- The Meltdown Starts- Trump Trounces Taiwan- Chips Clipped

SEMICON West- Jubilant huge crowds- HBM & AI everywhere – CHIPS Act & IMEC


Podcast EP241: A Look at Agile Analog IP with Chris Morrison

by Daniel Nenni on 08-16-2024 at 10:00 am

Dan is joined by Chris Morrison, who has over 15 years of experience in delivering innovative analog, digital, power management and audio solutions for international electronics companies, and developing strong relationships with key partners across the semiconductor industry. Currently he is the Director of Product Marketing at Agile Analog, the customizable analog IP company. Previously he held engineering positions, including 10 years at Dialog Semiconductor.

Chris reviews the first half of the year for Agile Analog with Dan. The conferences the company attended and the product traction achieved are discussed. Agile Analog enjoys strong market interest across a broad range of applications. The company is working with many of the leading foundries and attends those foundry events to connect with current and new customers.

Chris also discusses what’s in store at Agile Analog for the second half of the year. Collaboration with GlobalFoundries becomes a key new addition for the second half. From a product perspective, Chris sees significant interest in the company’s reconfigurable data converters and power management IP.

There is also keen interest in analog IP at leading edge nodes, so this will also be a focus for the second half of the year.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Anders Storm of Sivers Semiconductors

by Daniel Nenni on 08-16-2024 at 6:00 am


Anders Storm is the CEO of Sivers Semiconductors. Under his almost decade-long leadership, the company has experienced significant growth, solidifying its position as a key player in the global semiconductor industry. With expertise in Wi-Fi communications, 5G, and photonics, he drives the company’s corporate strategy, product innovation, organisational excellence, and shareholder engagement.

Tell us about your company
Sivers is at the forefront of SATCOM, 5G, 6G, Photonics, and Silicon Photonics, pushing the boundaries of global communications and sensor technology. We have two business units – Photonics and Wireless – which provide advanced chips and modules essential for best-in-class gigabit wireless and optical networks. We serve a wide range of industries, from data and telecommunications to aerospace and defense, meeting the growing need for faster computation and better AI performance. By switching from electric to optical connections, we’re also helping to create a more sustainable future.

What problems are you solving?
We address several critical issues across various industries. Sivers enhances global connectivity and communication by advancing SATCOM, 5G, and 6G technologies, ensuring faster and more reliable networks. Our chips and modules for gigabit wireless and optical networks tackle the need for high-speed data transmission, crucial for applications such as video streaming and cloud computing. Additionally, Sivers supports the growing demand for high-performance computing and AI applications, enabling faster data processing and efficient machine learning models. Serving industries from telecommunications to aerospace, Sivers facilitates innovation and improvement in products and services, while helping companies future-proof their infrastructure and scale with technological advancements.

What application areas are your strongest?
Our newest Radio Frequency Module is a high-performance, high-power, wide-bandwidth component designed for gigabit communication, for example in Fixed Wireless Access applications. It enables the creation of internet and data connections without the need for physical cables, making it ideal for providing wireless broadband to homes and businesses.

What keeps your customers up at night?
We find that our customers are most anxious about keeping up with the rapid pace of technological advancements in communications and sensor technologies, fearing that their current infrastructure might become obsolete and not sustainable long term. Similarly, the need to maintain a competitive edge in their respective industries by continuously innovating and integrating the latest technologies is a source of stress for our customers. In addition, the growing demand for higher data transmission speeds and computational power to support emerging applications like AI and machine learning may also weigh heavily on their minds.

What does the competitive landscape look like and how do you differentiate?
The growing demand for high-speed, reliable, and sustainable communication solutions across various industries puts pressure on companies to continuously innovate. Strategic partnerships and collaborations with tech companies, research institutions, and industry organizations are essential for staying competitive and expanding market reach in this rapidly evolving sector. Of course, the integration of AI into communication networks is increasingly crucial, as competitors use AI to optimize performance and enhance analytics.

What new features/technology are you working on?
Under a new contract with Blu Wireless, we are designing and developing advanced 5G long-range antenna modules that operate within the 57-71 GHz license-exempt band, providing high-speed broadband communication links for track-to-train applications. This is a new and exciting area for us as we seek to transform the way passengers and operators experience connectivity on the move. Another area is for AI clusters: connecting GPUs at 16 terabits per second using photons rather than electrons, reducing power consumption by up to 90 percent.
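A back-of-envelope calculation shows what a 90 percent reduction means at that link rate. The energy-per-bit figures below are purely illustrative assumptions for the sake of the arithmetic, not Sivers data:

```python
# Back-of-envelope check of the optical-link power saving at 16 Tb/s.
# The pJ/bit figures are HYPOTHETICAL placeholders, not vendor numbers.
ELECTRICAL_PJ_PER_BIT = 10.0   # assumed cost of an electrical SerDes link
OPTICAL_PJ_PER_BIT = 1.0       # assumed cost of a photonic link (90% lower)
LINK_RATE_TBPS = 16            # 16 terabits per second, per the article

def link_power_watts(pj_per_bit: float, rate_tbps: float) -> float:
    """Power = energy-per-bit * bits-per-second."""
    return pj_per_bit * 1e-12 * rate_tbps * 1e12

electrical = link_power_watts(ELECTRICAL_PJ_PER_BIT, LINK_RATE_TBPS)
optical = link_power_watts(OPTICAL_PJ_PER_BIT, LINK_RATE_TBPS)
print(f"electrical: {electrical:.0f} W, optical: {optical:.0f} W")
print(f"saving: {1 - optical / electrical:.0%}")
```

Under these assumed figures, a single 16 Tb/s link drops from 160 W to 16 W, which is why energy-per-bit is the metric that matters for GPU-to-GPU fabrics.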

How do customers normally engage with your company?
For customers requiring tailored solutions, Sivers engages in custom development contracts. These agreements outline the specific requirements and specifications for the custom product or solution, including performance metrics, timelines, and milestones. Such contracts often involve close collaboration between Sivers’ engineering teams and the customer’s technical staff. After that, we deliver the chips or modules in volume, entering into long-term supply agreements to ensure a steady and reliable source of integrated chips, modules, and other critical components. It is this phase we are now transitioning into: product sales will grow from 39% of revenue in the second quarter of 2024 to over 80% in 2026. This is where we will really leverage our custom development contracts into the next phase of high growth.

Also Read:

CEO Interview: Zeev Collin of Semitech Semiconductor

CEO Interview: Yogish Kode of Glide Systems

CEO Interview: Pim Donkers of ARMA Instruments


Emerging Memories Overview

Emerging Memories Overview
by Daniel Nenni on 08-15-2024 at 10:00 am

ReRAM History 2024

This year’s Future of Memory and Storage Conference (formerly the Flash Memory Summit) was again very well attended. The Santa Clara Convention Center is definitely the place to be for a Silicon Valley conference.

This post is about the Emerging Memories session organized by Dave Eggleston. We will be covering other sessions, but this was my #1. Having been in the embedded memory space for a good part of my career, I know there is a serious Moore’s Law scaling problem, and it will be interesting to see which new technology comes out on top.

Here is the session abstract:

In this session, we discuss emerging memories. Ultra-High Speed Photonic NAND FLASH technology revolutionizes memory operations by achieving ultra-high speeds with lower voltages and power consumption. This technology combines vertical NAND FLASH transistors with lasers/LEDs and photon sensors for efficient READ operations. ReRAM is now mainstream in applications such as automotive and edge AI due to its low power, scalability, and resilience to environmental conditions. We will explore the technology enhancements needed for wider adoption and the latest developments in advanced processes. ULTRARAM boasts exceptional properties like energy efficiency and extreme temperature tolerance, making it ideal for space and high-performance computing applications. We will highlight progress in fabrication processes and potential applications. Finally, we will discuss life beyond flash and the future of memory technologies like MRAM, ReRAM, PCM, and FRAM. Analysts will explore the impact on computer architectures, AI, and the memory market in the next 20 years, emphasizing the inevitability of transitioning to emerging memory types.

For me, ReRAM is a top contender. Amir Regev, VP of Quality and Reliability at Weebit Nano, presented “ReRAM: Emerging Memory Goes Mainstream,” which was very interesting. Here is the abstract. Weebit also has a lot of information and instructional videos on its website, which is quite good.

ReRAM today is being integrated as an embedded non-volatile memory (NVM) in a growing range of processes from 130nm down to 22nm and below for a range of applications: automotive, edge AI, MCUs, PMICs and others. It is low-power, low-cost, byte-addressable, scales to advanced nodes, and is highly resilient to a range of environmental conditions including extreme temperatures, ionizing radiation and electromagnetic fields. In this session, Weebit will discuss what technology enhancements are needed to proliferate ReRAM even further into applications with extended requirements. We will discuss the latest technical and commercial developments including data in advanced processes.

For those of you who don’t know, Resistive Random Access Memory (RRAM or ReRAM) is a type of non-volatile memory that stores data by changing the resistance of a material, unlike traditional memory technologies such as DRAM or flash, which store data as charge. Amir reminded us that ReRAM is not new, and I do remember past RRAM discussions at some of the top semiconductor companies and certainly at TSMC.
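The resistance-versus-charge distinction can be made concrete with a toy model of a single cell: a SET pulse puts the material in a low-resistance state, a RESET pulse returns it to a high-resistance state, and a READ senses current at a small voltage. The class and resistance values below are purely illustrative, not from any real device:

```python
# Conceptual model of one ReRAM cell: data lives in the RESISTANCE of a
# material (low vs. high resistance state), not in stored charge.
# Resistance values are illustrative, not from a datasheet.
class ReRAMCell:
    LRS_OHMS = 10_000      # low-resistance state, read as logical 1
    HRS_OHMS = 1_000_000   # high-resistance state, read as logical 0

    def __init__(self):
        self.resistance = self.HRS_OHMS  # start in the "erased" state

    def set(self):
        # SET pulse: form a conductive filament -> low resistance
        self.resistance = self.LRS_OHMS

    def reset(self):
        # RESET pulse: rupture the filament -> high resistance
        self.resistance = self.HRS_OHMS

    def read(self, v_read: float = 0.2) -> int:
        """READ: apply a small voltage, sense the current (I = V/R).
        Current above a threshold means the cell holds a 1."""
        current = v_read / self.resistance
        threshold = v_read / (self.LRS_OHMS * 10)
        return 1 if current > threshold else 0

cell = ReRAMCell()
assert cell.read() == 0   # non-volatile: holds its state with no power
cell.set()
assert cell.read() == 1
```

Because the state is a physical resistance rather than charge on a capacitor, it persists without refresh and tolerates the environmental extremes the Weebit abstract mentions.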

Jim Handy also presented. Jim is one of the most respected embedded memory analysts and a blogger like myself. You can find him at https://thememoryguy.com/.

Here is Jim’s presentation abstract:

Flash memory has scaled beyond what was thought possible 20 years ago. Can this continue, or will an emerging memory technology like MRAM, ReRAM, PCM, or FRAM move in to replace it? Are there other memory technologies threatened with similar fates? What will the memory market look like in another 20 years? This talk will explain emerging memory technologies, the applications that have already adopted them in the marketplace, their impact on computer architectures and AI, the outlook for important near-term changes, and how economics dictate success or failure. Noted Analyst Jim Handy, and IEEE President Tom Coughlin will present the findings of their latest report as they discuss where emerging memories complement CXL, Chiplets, Processing In Memory, Endpoint AI, and wearables, and they explain the inevitability of a conversion from established technologies to new memory types.

I invited Jim to be a guest on our Semiconductor Insiders podcast to talk in more detail, so stay tuned.

Bottom line: If you are currently researching embedded memory my advice is to look to the foundries. Foundry silicon reports do not lie.

Also Read:

Weebit Nano at the 2024 Design Automation Conference

Weebit Nano Brings ReRAM Benefits to the Automotive Market

ReRAM Integration in BCD Process Revolutionizes Power Management Semiconductor Design


The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation
by Kalar Rajendiran on 08-15-2024 at 6:00 am

Comparative Analysis of Chiplet Interconnect Standards (Physical Layer)

The semiconductor industry is experiencing a significant transformation with the advent of chiplet design, a modular approach that breaks down complex chips into smaller, functional blocks called chiplets. A chiplet-based design approach offers numerous advantages, such as improved performance, reduced development costs, and faster time-to-market. This approach improves yield by isolating defects to individual modules, optimizes transistor costs by allowing different manufacturing nodes for different components, and leverages advanced packaging technologies for enhanced performance. The modularity of chiplets supports scalable, customizable designs that accelerate time-to-market and enable targeted optimization for performance, power, and cost.

However, one of the most substantial barriers to widespread adoption has been the lack of standardization in how these chiplets communicate with each other. The Universal Chiplet Interconnect Express (UCIe) standard is poised to change that, making chiplet design more accessible and opening up new opportunities for innovation across the industry. Mayank Bhatnagar, a Product Marketing Director at Cadence, gave a talk on this subject at the FMS 2024 Conference in early August.

Standardization and Interoperability

Before standardized chiplet interfaces, custom designs for each chiplet were needed, leading to higher costs, longer development times, and limited interoperability. Companies had to develop proprietary interfaces for their chiplets, making it difficult to integrate components from different suppliers. This lack of interoperability increased development costs and limited the pool of available chiplets.

The adoption of standards simplifies this process, allowing designers to focus on core innovations while using pre-validated interfaces for communication. This reduces custom design efforts, accelerates development, and ensures seamless integration. Companies can now leverage proven chiplets, cutting costs and improving quality. Overall, standardization streamlines design, reduces resource use, and speeds up time-to-market. In recent years, a number of chiplet-to-chiplet interface standards have been developed.

A comparative analysis of these various standards indicates that, in terms of bandwidth efficiency, energy efficiency, and latency, UCIe excels.

The Role of UCIe in Chiplet Design

UCIe, or Universal Chiplet Interconnect Express, is an open industry standard that defines a high-bandwidth, low-latency interconnect protocol for connecting chiplets. UCIe provides a common interface for chiplets to communicate, much like how USB standardized peripheral connections in the PC industry.

With UCIe, companies can mix and match chiplets from various vendors, fostering a more competitive market and driving innovation. It lets designers focus on core innovations and highly customized cores and leverage standardized interfaces for the periphery. By surrounding highly customized cores with standard periphery, designers can maximize their market reach and efficiency.

Enabling Specialized and Customized Solutions

One of the most exciting possibilities enabled by UCIe is the potential for highly specialized and customized solutions. In the past, companies had to rely on expensive monolithic SoCs or resort to general-purpose SoCs that might not be perfectly suited for their specific application. With chiplets and UCIe, companies can build custom systems tailored to their exact needs, selecting the best components from a variety of suppliers. For example, a company developing an AI accelerator could choose a high-performance CPU chiplet from one vendor, a specialized neural processing unit (NPU) from another, and memory from a third. UCIe ensures that these components can communicate effectively, allowing the company to create a highly optimized solution without the need for an expensive monolithic custom SoC.
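The mix-and-match idea described above reduces to a simple compatibility rule: a package can be assembled from any vendors' chiplets as long as every die exposes the same die-to-die interface. The sketch below illustrates that rule; the class and function names are illustrative and not part of the UCIe specification:

```python
# Sketch of "mix and match": a common interconnect standard lets chiplets
# from different vendors be composed into one package. Names here are
# illustrative only, not UCIe spec terminology.
from dataclasses import dataclass

@dataclass
class Chiplet:
    vendor: str
    function: str       # e.g. "CPU", "NPU", "HBM memory"
    interconnect: str   # the die-to-die interface the chiplet exposes

def integrate(chiplets, standard="UCIe"):
    """A package can be assembled only if every chiplet speaks the same
    die-to-die standard, regardless of who manufactured it."""
    incompatible = [c for c in chiplets if c.interconnect != standard]
    if incompatible:
        raise ValueError(f"cannot integrate: {incompatible}")
    return f"package with {len(chiplets)} chiplets over {standard}"

system = integrate([
    Chiplet("Vendor A", "CPU", "UCIe"),
    Chiplet("Vendor B", "NPU", "UCIe"),
    Chiplet("Vendor C", "HBM memory", "UCIe"),
])
print(system)  # package with 3 chiplets over UCIe
```

Before UCIe, the `interconnect` field was effectively proprietary per vendor, so the `integrate` check failed for almost any cross-vendor combination; that is the barrier the standard removes.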

Custom Silicon for AI Applications

The demand for custom silicon is rapidly increasing, driven by the need to optimize hardware for specific AI applications such as training, inferencing, data mining, and graph analytics. AI training requires high-performance, parallel processing capabilities to manage large datasets and complex models, while AI inferencing demands low-latency, high-throughput processing for real-time predictions and decisions. Data mining benefits from custom silicon tailored for specific data processing and extraction tasks, and graph analytics requires chips designed to handle the complexity of graph processing and large-scale parallelism. A chiplet-based approach leveraging UCIe offers significant advantages for these applications in terms of performance, power efficiency, and scalability.

Fostering Innovation and Collaboration

As an open industry standard, UCIe not only reduces barriers to entry but also encourages collaboration and innovation within the semiconductor industry. By establishing a common platform for chiplet communication, UCIe enables companies to focus on their core competencies, whether that’s developing cutting-edge processors, advanced memory technologies, or specialized accelerators. This collaborative environment can lead to the development of new, innovative products that might not have been possible within the constraints of traditional SoC design. As more companies adopt UCIe and contribute to the ecosystem, the variety and quality of available chiplets will continue to grow, further driving innovation.

Summary

UCIe represents a significant step forward in the evolution of chiplet design, lowering the barriers to entry for companies of all sizes. By standardizing the communication between chiplets, UCIe makes it easier for companies to develop custom, high-performance systems without the need for costly and complex SoC designs. As a result, UCIe is expected to democratize the semiconductor industry, fostering greater innovation and competition while enabling a new wave of specialized and customized solutions. The future of chip design is modular, and with UCIe, that future is more accessible than ever. The growing demand for custom silicon for AI applications will drive further advancements and opportunities around UCIe technology.

For more details, visit Cadence’s UCIe product page.

Also Read:

The Future of Logic Equivalence Checking

Theorem Proving for Multipliers. Innovation in Verification

Empowering AI, Hyperscale and Data Center Connectivity with PAM4 SerDes Technology


CEO Interview: Zeev Collin of Semitech Semiconductor

CEO Interview: Zeev Collin of Semitech Semiconductor
by Daniel Nenni on 08-14-2024 at 10:00 am

Zeev 29

Zeev Collin is a seasoned technology executive and serial entrepreneur with over 25 years of experience in executive management across international semiconductor companies and startups. Prior to co-founding Semitech, Zeev co-founded and successfully exited two ventures focused on vehicle and trailer tracking devices. Earlier in his career, he played a key role in developing seminal soft modem technology, which was acquired by Conexant Systems, where he subsequently held VP positions in product development and business management. Today, Zeev continues to leverage his expertise as a board member and advisor for various startups. He holds a BSc in Computer Engineering and an MSc in Computer Science from the Technion – Israel Institute of Technology.

Tell us about your company?

Semitech Semiconductor is a dynamic fabless semiconductor company specializing in the development of cutting-edge communication technology. Our flagship products provide reliable, cost-effective communication solutions for a wide range of machine-to-machine applications (Internet of Things) in industrial and automotive environments. We focus on narrowband powerline communication (PLC) and wireless mesh technologies, enabling existing infrastructures to become “smart” with seamless communication without the need for additional wiring.

We are committed to a successful long-tail business model around niche applications by addressing a wide range of IoT communication needs with a versatile, multi-modal solution for both power lines and wireless mesh networks. Our motto is: “Connect Everything, Everywhere!”

What problems are you solving?

Our multi-modal devices offer the most adaptable communication solutions, effectively addressing the diverse needs of the Industrial IoT market for “monitor and control” applications, all while avoiding the cost and complexity of additional wiring.

We tackle several key challenges:
  • Infrastructure limitations: We eliminate the need for installing new communication network wiring in existing buildings or industrial facilities.
  • Reliability: Our solutions ensure robust communication even during wireless network failures by providing dependable connectivity over power lines, even in noisy and electrically challenging environments. We also offer hybrid mesh networks that combine PLC and wireless technologies.
  • Diverse requirements: The Industrial IoT encompasses a wide range of applications and geographies with varying needs. Our flexible, customization-focused approach delivers high-quality solutions, whether they are standard-based or proprietary, tailored to meet specific application and customer requirements.
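The hybrid-network idea in the reliability point above boils down to link failover: try the wireless path, and fall back to the power line when it fails. The sketch below illustrates that pattern with hypothetical function names; it is not a Semitech API:

```python
# Illustrative sketch of a hybrid PLC + wireless link: attempt wireless
# delivery first, fall back to powerline communication (PLC) on failure.
# Function names are hypothetical, not Semitech product APIs.
def send_with_fallback(payload, wireless_send, plc_send):
    """Try the wireless link; on failure, retry over the power line."""
    try:
        return wireless_send(payload)
    except ConnectionError:
        return plc_send(payload)  # PLC keeps the node reachable

# Simulated links for demonstration:
def flaky_wireless(payload):
    raise ConnectionError("RF interference")

def plc(payload):
    return f"delivered over PLC: {payload}"

print(send_with_fallback("sensor reading", flaky_wireless, plc))
# delivered over PLC: sensor reading
```

In a real hybrid mesh the fallback decision is made per hop rather than end-to-end, but the principle is the same: two physically independent media mean one noisy channel does not take the node offline.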
What application areas are your strongest?

Our primary domain is Industrial IoT, where we have key customer engagements and successfully deployed solutions across various applications, such as:

  • Tractor/Trailer Communication: We are the only solution supporting the PLC4TRUCKS protocol, used in North America for trailer ABS communication. We are collaborating with our customers to expand the use of PLC to other control and communication functions between tractors and trailers.
  • Point-to-Point Communication for Mining and Drilling: Our proprietary SpeedLink protocol is optimized for connecting subterranean sensors to a control center, enabling constant data streaming without the need for repeaters.
  • Smart Lighting: Our solutions allow customers to remotely control lighting systems in locations such as sporting venues, pools, and airfields.
  • Smart Metering: Our combination of wireless and PLC technologies is utilized by one of the largest metering companies worldwide.
What keeps your customers up at night?

Beyond the universal concerns of competitiveness and cost efficiency, our customers are specifically worried about the following aspects:

  • Reliability: Reliable communication is paramount. While data speed is often secondary, having a robust solution that operates effectively in noisy and changing channel conditions is essential. Our products are designed for long-term use and must maintain reliable operation throughout their lifespan.
  • Security: Since many of our target applications relate to critical infrastructure, security is a major concern. Ensuring the safety and integrity of data is crucial for our customers.
  • No New Wires: Retrofitting existing systems with sensors and remote-control capabilities requires communication solutions that do not need dedicated wiring and can utilize existing infrastructure, making this a significant consideration.
  • Automotive Qualification: With semiconductors becoming integral to the automotive industry, there is an increasing expectation that any semiconductor component used must meet stringent automotive-grade qualification requirements.
What does the competitive landscape look like and how do you differentiate?

When it comes to technology choices, our PLC solution competes with various wireless technologies like 5G and LoRa. However, we view these as complementary approaches rather than direct competition. The optimal approach typically depends on environmental conditions and the specific application requirements. Often, the best solution involves a combination of two or more communication technologies to ensure the best coverage and reliability.

In terms of actual competitors, Semitech often competes with large international companies. We differentiate ourselves by offering high-quality solutions that perform better in challenging, noisy channel conditions. More importantly, we focus on niche applications with long lifespans. Instead of pursuing “cookie-cutter” solutions for mass-market applications, we embrace market fragmentation and the diversity of application needs. We provide our customers, even the smaller ones, with boutique-quality support and a level of customization that large companies cannot or will not offer.

What new features/technology are you working on?

We are continually enhancing our existing solution with new features, including:

  • Faster data rates via our SpeedLink protocol
  • PLC-BLE bridging for trailer telematics applications
  • Hybrid mesh networking

Next year, we will introduce the first and only automotive-grade PLC4TRUCKS solution. Additionally, we are developing new technologies, such as GreenPHY for the EV market and a WiSUN/PLC combo solution.

How do customers normally engage with your company?

Due to the nature of our business, we embrace direct interaction with our customers to better understand their requirements and needs. Our engineering team is very adept at providing tailored solutions, and we encourage direct, open communication between our engineers and our customers. Our website serves as a key source of customer leads, allowing potential clients to find and contact us directly. Additionally, we employ a network of reps who help channel customers to us and provide the first line of support.

We regularly engage in collaborative engineering projects with our customers to develop specialized features or advanced solutions tailored to their specific needs.

Also Read:

CEO Interview: Pim Donkers of ARMA Instruments

CEO Interview: Dr. Babak Taheri of Silvaco

CEO Interview: Orr Danon of Hailo


WEBINAR: Silicon Area Matters!

WEBINAR: Silicon Area Matters!
by Daniel Nenni on 08-14-2024 at 8:00 am

SemiWiki Flex Logix Webinar

When designing IP for system-on-chip (SoC) and application-specific integrated circuit (ASIC) implementations, IP designers strive for perfection. Optimal engineering often yields the smallest die area, thereby reducing both cost and power consumption while maximizing performance.

Similarly, when incorporating embedded FPGA (eFPGA) IP into a SoC, designers prioritize these critical factors. eFPGA IP is inherently scalable, enabling it to be tailored to each customer’s specific requirements. However, the necessary FPGA logic is not only determined by the programmed design, but also by the compiler and FPGA architecture used.

Embedded FPGA provides crucial flexibility, allowing SoCs to adapt to changing standards, protocols, customer requirements and post-quantum cryptography algorithms, as well as enabling software acceleration and deterministic processing. Flex Logix’s EFLX eFPGA architecture delivers industry-leading performance, power, and area (PPA) metrics. It features a familiar 6-input lookup table (LUT) along with a highly efficient, patented routing switch matrix that sets it apart from competitors. This switch matrix reduces the number of metal stack layers, enabling EFLX to meet the stringent requirements of edge IoT devices.
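A 6-input LUT is conceptually simple: any Boolean function of six inputs is stored as a 64-entry truth table and "computed" by lookup. The toy model below illustrates the concept only; it says nothing about Flex Logix's actual circuit design:

```python
# Toy model of a 6-input lookup table (LUT6): any Boolean function of six
# inputs is stored as a 2**6 = 64-entry truth table and evaluated by
# indexing into it. Illustration only, not Flex Logix's implementation.
class LUT6:
    def __init__(self, truth_table):
        assert len(truth_table) == 64  # one entry per input combination
        self.table = truth_table

    def evaluate(self, inputs):
        assert len(inputs) == 6
        # Pack the six input bits into an index into the truth table.
        index = sum(bit << i for i, bit in enumerate(inputs))
        return self.table[index]

# Program the LUT as a 6-input AND: only index 63 (all ones) yields 1.
and6 = LUT6([1 if i == 63 else 0 for i in range(64)])
assert and6.evaluate([1, 1, 1, 1, 1, 1]) == 1
assert and6.evaluate([1, 0, 1, 1, 1, 1]) == 0
```

Reprogramming is just rewriting the 64 table bits, which is why the compiler's ability to pack logic densely into these LUTs (and route between them) directly determines the silicon area of the resulting eFPGA array.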

Recently, Flex Logix announced the availability of eXpreso, its powerful 2nd-generation EFLX eFPGA compiler and successor to the first-generation compiler, EC1.0. eXpreso, which has been in development for years, is now shipping to alpha customers for evaluation. The new compiler delivers up to 1.5x higher frequency, 2x denser LUT packing and 10x faster compile times for all existing EFLX tiles and arrays. Now IC designers can further reduce eFPGA IP implementation area to levels never seen before.

REPLAY: Reconfigurability is now achievable with significantly reduced silicon area, thanks to new Flex Logix eFPGA compiler
Abstract:

Many IC architects value the adaptability and reconfigurability of embedded FPGA (eFPGA) technology, but often dismiss it due to the implementation cost; smaller die area and lower power consumption are their primary drivers. Flex Logix has addressed this challenge with its new, game-changing eFPGA compiler tool, eXpreso, which can dramatically decrease the die-area impact of adding eFPGA. eXpreso’s innovative routing optimizations and packing ability can cut design implementations in half.

This webinar will provide an opportunity to learn more about Flex Logix’s embedded FPGA IP and problem-solving applications, and to see a live demonstration of eXpreso and how it can significantly reduce the area of embedded FPGA IP.

Presenters:

Jayson Bethurem – VP, Marketing & Business Development
Jayson is responsible for marketing and business development at Flex Logix. Jayson spent six years at Xilinx as Senior Product Line Manager, where he was responsible for about a third of revenues. Before that, he spent eight years at Avnet as an FAE, showing customers how to use FPGAs to improve their products. Earlier, he worked at start-ups using FPGAs to design products.

Brian Philofsky – Sr Director of Solutions Architecture
Brian is Sr Director of Solutions Architecture supporting customers in their technical evaluation and implementation of Flex Logix Hardware and Software. Brian spent more than 25 years at Xilinx/AMD in various roles including Director of Technical Marketing, Principal Engineer for Power Solutions, and managing applications, design services and support roles. Brian has been awarded 13 US Patents.

About Flex Logix

Flex Logix is a reconfigurable computing company providing leading edge eFPGA, DSP/SDR and AI Inference solutions for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density which is critical for communications, networking, data centers, microcontrollers and others. Its scalable DSP/SDR/AI is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm, 3nm and 18A in development. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.

Also Read:

Flex Logix at the 2024 Design Automation Conference

Elevating Your SoC for Reconfigurable Computing – EFLX® eFPGA and InferX™ DSP and AI

WEBINAR: Enabling Long Lasting Security for Semiconductors