CEO Interview: Pim Donkers of ARMA Instruments
by Daniel Nenni on 08-02-2024 at 6:00 am

Pim Donkers

Pim Donkers is the co-founder and Chief Executive Officer of Switzerland-based ARMA Instruments, a technology company which produces ultra-secure communication devices.

Pim is a serial technology entrepreneur who operates internationally, with several successful companies in his portfolio, such as his recent IT outsourcing company Binarylabs.

His interests are in geopolitics, technology, and psychology. His diverse background allows him to observe things differently and drive unconventional and non-linear solutions to market.

Tell us about your company

In 2017, when we looked at available secure communications technology, we saw products that were technologically incapable of dealing with advanced adversaries, and products challenged by the involvement of Nation State actors. We saw a need for a different kind of technology, and we knew it would take a different kind of company to build it. That’s why we founded ARMA Instruments.

Our company mission is to provide absolutely secure communications and data security, based on zero-trust principles, while remaining transparent about our organizational infrastructure and technology. Our corporate tagline is “Trust No One”. This philosophy has allowed us to create the most secure handheld personal communicator available.

ARMA products include an ultra-secure mobile personal communicator with several industry-first features for secure communications. The ARMA G1 MKII, for example, is classified as a “Dual Use Good” for both military and civilian applications and operates server-less over a cellular network. ARMA developed the entire system from the ground up, from the message application and the ARMA Linux operating system to the hardware electronics boards.

There are no commercial processors or third-party software in our products. Everything is proprietary ARMA technology. Our personal communicator has no ports of any kind. No microphone and no camera. Charging is done wirelessly. This architecture dramatically reduces attack surfaces and renders malware that exploits known OS weaknesses useless.

Digging a bit deeper, our patented Dynamic Identity, working with ARMA’s VirtualSim feature, prevents targeted cyberattacks and anti-personnel attacks by changing the device’s cellular network identity at non-predictable intervals. We say this creates an “automated burner phone”.
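
To make the “automated burner phone” idea concrete, here is a minimal sketch of identity rotation at non-predictable intervals. It is not ARMA's implementation; the identifier formats, the interval bounds, and the apply_identity callback are assumptions made purely for illustration.

```python
# Minimal illustrative sketch, NOT ARMA's implementation. It shows the general
# idea behind an "automated burner phone": rotate the device's cellular
# identifiers (IMSI/IMEI-style values) at unpredictable intervals so traffic
# cannot be linked to one stable identity. Identifier formats, interval bounds,
# and the apply_identity callback are assumptions made for this demo.
import secrets
import time


def random_identity() -> dict:
    """Generate a fresh, random stand-in for a cellular identity."""
    return {
        "imsi": "001" + "".join(str(secrets.randbelow(10)) for _ in range(12)),
        "imei": "".join(str(secrets.randbelow(10)) for _ in range(15)),
    }


def rotation_loop(apply_identity, cycles: int, min_s: float = 60.0, max_s: float = 900.0) -> None:
    """Apply a new identity, then wait a cryptographically random interval."""
    for _ in range(cycles):
        apply_identity(random_identity())
        # Use secrets (not random) so the schedule cannot be predicted from prior timings.
        wait_s = min_s + secrets.randbelow(int((max_s - min_s) * 1000)) / 1000.0
        time.sleep(wait_s)


if __name__ == "__main__":
    rotation_loop(lambda ident: print("switching to", ident), cycles=3, min_s=1.0, max_s=3.0)
```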

Data from communication sessions is stored only on the ARMA device, never in the cloud. The device can sense attempts to access this data either through physical means or electronic disruption such as side-channel attacks. In these cases, the device will execute a self-destruct sequence. These are just a few of the capabilities of the device. There are many other secure features deployed to adhere to the highest security levels in the industry.

What problems are you solving?

Mobile phones, including secure ones, essentially act as personal location beacons on the global cellular network. Eavesdropping on anyone has become remarkably easy. A high-profile example of this is Israel’s NSO Group Pegasus spyware that can be installed on iPhones and Android devices, allowing operators to extract messages, photos and emails, record calls and even secretly activate microphones and cameras.

As mentioned, our device runs a proprietary OS and has no external ports, microphones or cameras. Users remain anonymous on the network because the device changes its IMSI and IMEI numbers through our virtual SIM environment. Pegasus, and all other spyware of this type, presents no threat to us.

Smartphone security weaknesses can create life-threatening situations. For example, through our forward-deployed engineers in Ukraine, we’ve seen smartphones used as sophisticated homing devices for ground or airborne attacks, such as drones rigged with explosives or passive listening detonators. The decreasing cost and increasing availability of such technology make smartphone-assisted attacks a very real threat. This is why broadcasting your location on the network doesn’t make sense. Our Dynamic Identity technology, patented in the US, helps mitigate these risks.

Additionally, any phone call, message, or media sent is typically stored on a server to ensure delivery if the recipient is offline. This data at rest outside user devices is subject to zero oversight, leaving it vulnerable to being stored indefinitely and decrypted in the future. As I mentioned, our server-less protocol ensures data at rest is only on user devices, with phone calls made directly from device to device. And this data is encrypted and protected with our self-destruct capability.

What application areas are your strongest?

Our technology secures communications universally, making its applications vast. We’ve seen significant interest from branches of governments, defense/military organizations, intelligence contractors, industrial markets, nuclear power facilities, emergency services, financial organizations, healthcare, and the high-tech industry to name a few. Interest is strong, and we are growing rapidly across the world.

What keeps your customers up at night?

Our customers are aware that modern technology often exploits their data. They understand the trade-off between convenience and the security of their intellectual legacy, knowing that no bulletproof solutions exist. Distrust and espionage are occurring throughout the world at all levels. This awareness keeps our customers vigilant and concerned. 

For example, government officials and corporate executives are primarily concerned about the security and confidentiality of their sensitive communications, fearing interception or espionage. Meanwhile, security personnel and field agents are often more focused on the evolving landscape of cyber threats and ensuring their data remains unaltered and trustworthy.

Overall, whether it’s about protecting privacy, adhering to legal regulations, or ensuring operational continuity during critical times, ARMA G1 customers share a common need for robust, reliable, and secure communication solutions to mitigate diverse concerns.

What does the competitive landscape look like, and how do you differentiate?

Many competitors focus solely on adding security layers to commercial software, overlooking that secure communication requires more than just software. This is partly because hardware development is unpredictable and time-consuming, making it less attractive to VCs. Those who claim to develop their own hardware often just repurpose existing mobile boards.

Purpose-built phones from companies like Airbus, Sectra, Thales, and Boeing use outdated technology due to the popularity of BYOD and the high costs and time involved in obtaining new certifications for innovations. We differentiate by offering genuinely innovative, purpose-built solutions. 

In addition to new and unique purpose-built hardware, ARMA provides differentiating technology with our Dynamic Identity VirtualSim environment, and server-less Infrastructure designed to comply with Top Secret classification levels.

What new features/technology are you working on?

ARMA will introduce its second-generation ARMA G1 MKII, a secure text-only device, in Q3 this year. It will soon be followed by an enhanced G1 with secure voice capability.

There are many other ARMA products currently under development and they will be announced as we bring them to market over the next 12 months.

How do customers normally engage with your company?

At present, the best way to contact us is through our website here. You can also email us at sales@armainstruments.com. Soon, we will expand access to our technology through strategic partnerships and resellers with organizations that have a worldwide footprint.

Also Read:

CEO Interview: Orr Danon of Hailo

CEO Interview: David Heard of Infinera

CEO Interview: Dr. Matthew Putman of Nanotronics


Easy-Logic and Functional ECOs at #61DAC
by Daniel Payne on 08-01-2024 at 10:00 am

I first visited Easy-Logic at DAC in 2023, so it was time to meet them again at #61DAC in San Francisco to find out what’s new this year. Steven Chen, VP of Sales for North America and Asia, met with me in their booth for an update briefing. Steven has been with Easy-Logic for six years now and earned an MBA from Baruch College in New York. This was the fifth year that they exhibited at DAC.

A functional Engineering Change Order (ECO) is a way to modify the gate-level netlist post-synthesis with the least disruption and effort, in order to minimize costs. A logic bug found post-silicon can often be remedied with a post-layout ECO, where spare cells are connected with updated metal layers, keeping mask costs low for a new silicon spin.

Their booth display showed an EDA flow and where their four ECO tools fit into the flow.

Easy-Logic exhibit at #61DAC

Something new in 2024 is the GTECH design flow from Easy-Logic, a way to simplify the functional ECO flow for ASIC designs. With GTECH the user can identify the ECO points, the gate-level circuits to be modified, more quickly than with the traditional RTL-to-RTL design flow. The EasylogicECO tool still produces the smallest ECO patch by refining the ECO points, reducing design complexity and shortening the time required. The GTECH design approach fits into your existing EDA tool flows, making it quick to learn and use. There was a press release about GTECH in May 2024, and at DAC they were showing demonstrations of this capability.

Steven talked about how even a junior engineer can use this tool easily. When silicon comes back from the fab and isn’t working 100%, a full re-spin can require around 100 mask changes, while a metal ECO may need only 30-40, so it is much cheaper to implement. In the old days engineers made ECO changes manually, but that approach required too much engineering effort and was error prone; an automated approach dramatically improves the chance of success. In many cases a designer will give up on a manual metal ECO if the spare cells needed are expected to exceed 50, because the netlist has been flattened and is too hard to trace. By producing a smaller patch with quicker runtime, EasylogicECO helps designers increase their success rates on metal ECO projects. This approach is widely adopted by IC design houses in Asia, reducing costs by changing fewer metal layers and enabling the quickest possible product launch, and EasylogicECO plays an important role in driving it.

The EasylogicECO tool works across process nodes, from planar CMOS to FinFET, including leading processes like 3nm. Semiconductor Review APAC magazine recognized Easy-Logic as one of the top 10 EDA vendors in July 2023.

Summary

Easy-Logic was founded in 2013, is based in Hong Kong, and received its first order for ECO tools by 2018. By 2021 the company had expanded with an R&D center in Shenzhen, adding new ECO products. Today they have four ECO tools and over 40 happy customers from around the globe. Adding the GTECH design flow this year makes their ECO tools even easier to use, so their momentum continues to grow in the marketplace. I look forward to watching their technology and influence expand.

Related Blogs

 


proteanTecs Introduces a Safety Monitoring Solution #61DAC
by Mike Gianfagna on 08-01-2024 at 6:00 am

At #61DAC it was quite clear that semiconductors have “grown up”. The technology has taken its place in the world as a mission-critical enabler for a growing list of industries and applications. Reliability and stability become very important as this change progresses. An error or failure is somewhere between inconvenient and life-threatening. The field of automotive electronics is a great example of this metamorphosis. We’ve all heard about functional safety standards such as ISO 26262, but how do we make sure these demanding specs are always met? proteanTecs is a company offering a unique technology that provides a solution to these growing safety demands. During DAC 2024, you could see product demos showcasing automotive predictive and prescriptive maintenance. Read on to learn how proteanTecs introduces a safety monitoring solution.

Making the Roads Safer

proteanTecs defines a category on SemiWiki called Analytics. The company was founded with a mission to give electronics the ability to report on its own health and performance. It brings together a team of multidisciplinary experts in the fields of chip design, machine learning and analytics software with the goal of monitoring the health and performance of chips, from design to field. Its products include embedded monitoring IP and a sophisticated array of software, both embedded on chip and in the cloud. All this technology works together to monitor the overall operating environment of the chip to ensure top performance and to spot or predict problems before they become showstoppers.

News from the Show Floor

At the show, I was fortunate to have the opportunity to meet with Uzi Baruch, Chief Strategy Officer and Noam Brousard, VP of Solutions Engineering. It was a memorable and far-reaching discussion of the contributions proteanTecs is making to facilitate continued scaling for the electronics industry.

One comes to expect polished presentations at a show like #61DAC and indeed that was part of the meeting. I was also treated to a very entertaining and informative video; a link is coming. But perhaps the most impressive part was the live demonstration of the company’s technology. This is a brave move for any company at a major trade show. The solid performance of the demo spoke volumes about the reliability of the technology. Let’s look at some of the details.

proteanTecs RTSM™ (Real Time Safety Monitoring) offers a new approach to safety monitoring for predictive and prescriptive maintenance of automotive electronics. The application monitors the timing margin of millions of real paths on the chip, with very high coverage, in real time and under real workloads, alerting the system before margins fall below the lowest point that still allows an error-free reaction. More details of this approach are shown in the figure below.

There are many aspects of system operation that must be monitored and analyzed to achieve the required balance for system reliability and performance. A combination of embedded sensors, sophisticated software and AI make it all work.  The following list will give you a feeling for the completeness of the solution:

  • Monitor non-stop: Remains always-on and monitors in-mission mode
  • Assess issue severity: A performance index for risk severity grading
  • Detect logical failures: Monitor margins with critical protection threshold
  • Boost reaction time: Low latency of the warning signals
  • Prevent fatal errors: A prescriptive approach for avoiding failures
  • Customizable outputs: Configure multiple output interfaces to fine-tune desired dynamic adjustment

An example of this operation is shown in the figure below. RTSM outputs a Performance Index, as well as a notification targeting the device’s power/clock frequency management units. The Performance Index indicates how close the device is to failure (predictive). The warning notification helps adapt the voltage or frequency to overcome the risk of incoming failure (prescriptive). Similarly, as with any other request or input for dynamic power/clock frequency management, the RTSM output is customized to the specific system interfaces.
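
To illustrate the predictive/prescriptive split described above, here is a generic sketch of a margin-monitoring control loop. It is not proteanTecs' RTSM algorithm; the index scale, the thresholds, and the action names are assumptions chosen only to show how a predictive index can drive a prescriptive reaction.

```python
# Generic sketch of a predictive/prescriptive margin-monitoring loop.
# This is NOT proteanTecs' RTSM algorithm: the 0-100 index scale, the
# thresholds, and the action names are assumptions made for illustration.
from dataclasses import dataclass


@dataclass
class MarginSample:
    worst_slack_ps: float   # worst observed timing slack across monitored paths
    clock_period_ps: float  # current clock period


def performance_index(sample: MarginSample) -> float:
    """Map remaining timing margin to a 0-100 'distance from failure' score."""
    ratio = sample.worst_slack_ps / sample.clock_period_ps
    return max(0.0, min(100.0, 100.0 * ratio))


def react(index: float, warn_at: float = 15.0, critical_at: float = 5.0) -> str:
    """Prescriptive step: pick an action for the power/clock management unit."""
    if index <= critical_at:
        return "raise_voltage_or_drop_frequency_now"   # avert an imminent failure
    if index <= warn_at:
        return "schedule_adjustment"                   # predictive early warning
    return "no_action"


if __name__ == "__main__":
    for slack in (220.0, 60.0, 12.0):
        sample = MarginSample(worst_slack_ps=slack, clock_period_ps=1000.0)
        idx = performance_index(sample)
        print(f"slack={slack:6.1f} ps  index={idx:5.1f}  action={react(idx)}")
```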

To Learn More

I have only scratched the surface of the capabilities offered by proteanTecs. If a closed-loop predictive and prescriptive system sounds like an important addition to your next design, you need to get to know these folks. You can start with that short, entertaining and informative video here.

There is also a comprehensive white paper available. Highlights of this piece include:

  • The limitations of conventional safety assurance techniques
  • RTSM’s algorithm-based Performance Index for assessing the issue severity
  • Why monitoring margins under real workloads is crucial for fault detection
  • The technology behind RTSM which allows it to monitor in mission-mode
  • The role of RTSM in introducing Predictive and Prescriptive Maintenance

You can get your copy of the white paper here. And that’s how proteanTecs introduces a safety monitoring solution at #61DAC.

Also Read:

proteanTecs at the 2024 Design Automation Conference

Managing Power at Datacenter Scale

proteanTecs Addresses Growing Power Consumption Challenge with New Power Reduction Solution


Semiconductor CapEx Down in 2024, Up Strongly in 2025
by Bill Jewell on 07-31-2024 at 4:00 pm

The U.S. CHIPS and Science Act provides incentives for semiconductor manufacturing in the United States. As of July 30, 2024, The CHIPS Program Office has announced over $30 billion in grants and over $25 billion in loans, according to the Semiconductor Industry Association (SIA). The awards have been given to fourteen companies; however, five companies have accounted for the vast majority of the funds as shown below. These fab projects are also receiving state and local subsidies. The total investment in these ventures will be over $284 billion. The timing of these projects varies, with some scheduled for completion in 2025. Other fabs will be finished over the next two to seven years. In addition to the companies listed below, Texas Instruments is in the process of applying for CHIPS Act funding for its planned wafer fabs in Sherman, Texas and Lehi, Utah.

What impact will the CHIPS Act awards have on semiconductor capital spending over the next few years? The companies would have certainly built these fabs without the CHIPS money. Companies plan fabs based on their capacity needs to meet their business plans. The CHIPS funds likely had an impact on the location of some of the wafer fabs. TSMC and Samsung may not have located their new fabs in the U.S. without the CHIPS Act money. The CHIPS Act awards may also have moved some of these investments forward a year or two. The effects of the CHIPS Act awards are not likely to be significant in 2024 but will likely boost 2025 CapEx (capital expenditures).

The U.S. is not the only country to subsidize its semiconductor industry. According to Bloomberg, planned semiconductor investments include $46 billion from the European Union (EU), $21 billion from Germany, $142 billion from China, $55 billion (in tax incentives) from South Korea, $25 billion from Japan, $16 billion from Taiwan, and $10 billion from India.

Our Semiconductor Intelligence estimate of total semiconductor CapEx in 2024 is $166 billion, down 2% from 2023. We are projecting an 11% increase in CapEx in 2025 to reach $185 billion, surpassing the all-time high of $182 billion in 2022.

Two of the major memory companies, SK Hynix and Micron Technology, are planning double-digit CapEx increases in 2024, while Samsung is guiding for a slight decrease. SK Hynix and Micron are projecting significant CapEx growth in 2025, with SK Hynix at 75% and Micron at 47%.

The dominant independent foundry company, TSMC, plans a 3% cut in 2024 CapEx and a 10% increase in 2025 based on the mid-point of its guidance. SMIC expects no change in CapEx in 2024 while UMC plans a 10% increase. GlobalFoundries will cut CapEx 61% in 2024 but should increase it significantly in 2025 as it begins construction on its $11.6 billion wafer fab project in Malta, New York.

The largest integrated device manufacturer (IDM), Intel, projects a 2% increase in 2024 CapEx. Texas Instruments is sticking to its plan to spend an average of $5 billion on CapEx over the next few years. STMicroelectronics and Infineon Technologies both plan CapEx cuts in 2024 after strong increases in 2023.

Our forecast of 11% growth in 2025 semiconductor CapEx may be on the conservative side. Just the plans from TSMC, Micron and SK Hynix account for two-thirds of the $19 billion CapEx increase from 2024 to 2025. Samsung, the largest spender, will likely increase its CapEx substantially in 2025 to maintain its memory market share and increase its foundry business, which is second to TSMC. In its June 2024 forecast, SEMI projected a 17% increase in spending on 300mm fab equipment in 2025 after a 6% increase in 2024. WSTS’ June 2024 forecast called for semiconductor market growth of 16% in 2024 and 12.5% in 2025. Our upside projection is a 20% increase in 2025 CapEx.
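
A quick back-of-the-envelope check, using only the figures quoted above, shows how these numbers fit together:

```python
# Back-of-the-envelope check of the CapEx figures quoted above
# (all values are the article's own estimates, in billions of US dollars).
capex_2024 = 166.0                     # 2024 estimate, down 2% from 2023
capex_2023 = capex_2024 / (1 - 0.02)   # implied 2023 level, about 169
capex_2025 = capex_2024 * 1.11         # +11% projection, about 184 (article cites ~$185B)

increase_2025 = 185.0 - capex_2024     # the $19B year-over-year increase cited
big3_share = 2 / 3 * increase_2025     # TSMC + Micron + SK Hynix, roughly $13B

print(f"implied 2023 CapEx: ${capex_2023:.0f}B")
print(f"2025 projection:    ${capex_2025:.0f}B")
print(f"big-3 share of the ${increase_2025:.0f}B increase: about ${big3_share:.0f}B")
```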

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry  –  manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Automotive Semiconductor Market Slowing

2024 Starts Slow, But Primed for Growth

Electronics Turns Positive

 


Defacto Technologies and ARM, Joint SoC Flow at #61DAC
by Daniel Payne on 07-31-2024 at 10:00 am

At #61DAC I stopped by the Defacto Technologies exhibit and talked with Chouki Aktouf, President and CEO, to find out what’s new in 2024. Arm and Defacto have a joint SoC design flow that uses the Arm IP Explorer tool along with Defacto’s SoC Compiler, which helps quickly create your top-level RTL, IP-XACT and UPF files. This tool flow enables an engineer to define an Arm-based system architecture by selecting IP cores from a catalog and then parameterizing them. Adding custom IP blocks completes the system design.

ARM and Defacto joint design flow

This more automated approach saves many weeks of manual effort, since making changes and updating files is now much simpler.

Using Defacto tools like SoC Compiler, your team can also configure SoC designs with RISC-V cores. Large chip assembly can be up to 30X faster using Defacto. CAD groups can control the SoC Compiler tool with their favorite scripting languages:

  • Python
  • Tcl
  • Java
  • Ruby
  • C++

I learned that Defacto presented a poster session at DAC, “New SoC Creation Flow based on Extraction and Recreating from Previous SoC”. There are AI customers using SoC Compiler, but I cannot mention any names yet, so stay tuned.

Defacto exhibit at #61DAC

During RTL DFT signoff, there are checks for testability and test coverage evaluation, so that you find any test-related issues early in the design cycle while coding RTL. You can even explore moving test points around and see the impact on implementation. Designers can also simulate their peak power at the RTL stage, instead of waiting for gate-level implementation, saving time and providing critical feedback.

With a general shortage of SoC engineers, using automation from EDA tools like SoC Compiler is another way to keep projects on schedule.

Chouki told me that their EDA spinout, Innova, has a tool to predict how many EDA licenses and compute resources will be required for any SoC project. The Innova PDM has an AI engine and is being used first in Europe with initial customers and will soon expand to more geographic regions. The whole idea with Innova PDM is to reduce project costs by better planning metrics.

Summary

At SemiWiki we’ve been blogging about Defacto since 2016, and every year they continue to steadily add new EDA tool features and grow their spinout company Innova. The biggest news for 2024 has to be the joint design flow with Arm and inclusion in the Arm partner ecosystem catalog. Chouki Aktouf’s enthusiasm is simply contagious and brings a smile to my face, so plan to give this company a look and follow up with a visit or call.

You can find Defacto at shows like DAC, IP-SoC, ITC, ChipEx.

Related Blogs


Theorem Proving for Multipliers. Innovation in Verification
by Bernard Murphy on 07-31-2024 at 6:00 am

An explosion in multiplier types/combinations lacking well-established C reference models for equivalence checking is prompting a closer look at theorem proving methods for verification. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Sound and Automated Verification of Real-World RTL Multipliers. The paper was presented at the 2021 FMCAD conference. The authors are from UT Austin.

Multipliers are cropping up everywhere: in AI for MACs (multiply-accumulate) and dot products, in elliptic curve cryptography, and in countless applications in signal processing (filters and FFTs for example). What is distinctive about these multipliers is size (up to 1024×1024 in some instances) and novel architectural complexity. For example, a naïve architecture for a MAC (a multiply function followed by an add function) is too slow and too large for state-of-the-art designs; current implementations fuse multiply and add functions for faster and smaller implementations.

While these new approaches are faster and more area efficient, they are correspondingly more challenging to fully verify. Equivalence checking is the verification method of choice but depends on well-established reference models for comparison. The authors propose a formal approach based on the ACL2 theorem prover to soundly verify correctness in very respectable run times. FYI ACL2 is in the same general class of proof assistants as Coq, Isabelle, Lean and others.

Paul’s view

Switching back from AI to formal verification this month. Most commercial formal tools use SAT and BDD, but there is a niche market for alternative “theorem prover” based tools, especially to verify complex arithmetic circuits. Theorem provers convert a circuit into an equation by walking through it from left to right, building an equation for each gate output based on the equations for its inputs (a technique known as “symbolic simulation”). The resulting circuit equation is then iteratively transformed through a series of rewrite rules (literally like a human would perform a proof) until, hopefully, it is identical to the expected equation for the overall circuit.

Theorem provers are susceptible to equation size explosion. Multiplier circuits, especially complex MAC and dot-product circuits with truncation and right-shifted outputs, as might be needed for an AI accelerator, are especially problematic. This paper proposes a new set of rewrite rules that can prove correctness on several such complex circuits in a matter of a few minutes where prior methods explode and fail to complete.

The authors’ key idea is to represent all the circuit equations in a carry-save-like notation by defining two functions, a carry function c(x) = ROUND_DOWN(x/2) and a sum function s(x) = MOD_BASE2(x). The propagation equations for gates are reworked to use these two functions as much as possible. For example, for a half adder with inputs a and b, the outputs would normally be written c_out = a*b and s_out = a + b - 2*a*b; in c-s notation they become c_out = c(a+b) and s_out = s(a+b).

The authors then define various c-s rewrite rules, e.g. c(s(x)+y) = c(x+y) – c(x) which are applied continuously as the circuit is being walked. The combination of the c-s notation and these rewrite rules has a transformational effect on the complexity of the equation that is built. Really nice paper, well written – the idea makes such good intuitive sense that I can’t help but wonder how it has not been tried before! Will definitely be following up with the Jasper formal team here at Cadence.
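
To see why this notation is so effective, here is a small brute-force check of the definitions and the sample rewrite rule quoted above; it is my own sketch, not the paper's ACL2 machinery.

```python
# Brute-force check of the c-s notation quoted above: c(x) = ROUND_DOWN(x/2),
# s(x) = MOD_BASE2(x), the half-adder identities, and the sample rewrite rule
# c(s(x) + y) = c(x + y) - c(x). This only verifies the arithmetic identities;
# it is not the paper's ACL2 rewriting framework.

def c(x: int) -> int:   # carry: round x/2 down
    return x // 2

def s(x: int) -> int:   # sum: x modulo 2
    return x % 2

# Half-adder outputs in c-s form match the usual Boolean definitions.
for a in (0, 1):
    for b in (0, 1):
        assert c(a + b) == (a & b)   # carry-out is AND
        assert s(a + b) == (a ^ b)   # sum-out is XOR

# The sample rewrite rule holds on a range of non-negative integers.
for x in range(64):
    for y in range(64):
        assert c(s(x) + y) == c(x + y) - c(x)

print("c-s identities verified on the tested range")
```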

Raúl’s view

Temel and Hunt take us on a grand tour of designing and verifying multipliers. They explore different multiplier architectures, such as standalone multipliers with full, truncated or right shifted outputs, integrated multipliers, MACs, dot-products, various encodings, and so on. The paper shows a sample design of multiplier modules that have multiple uses. They present results that include the following architecture elements:

  • Partial product generation algorithms, such as simple partial products, Booth encoding radix-4, or radix-2.
  • Summation tree reduction algorithms such as counter-based Wallace, array, Dadda, traditional Wallace, overturned-stairs, balanced delay, redundant binary addition, 4-to-2 compressor and 7-to-3 compressor trees, and merged multipliers with Dadda tree.
  • For final stage addition, these multipliers implement Kogge-Stone, ripple-carry, Brent-Kung, Han-Carlson, Ladner-Fischer, carry-select, conditional sum, variable-length carry-skip, block carry-lookahead, and regular carry-lookahead adders

The main section of the paper is on verification. It builds on previous work on a term rewriting algorithm that can verify a wide range of isolated multiplier designs. It extends this to cover the multipliers described above and to return counterexamples for buggy designs. It is based on a term rewriting (replacing terms according to a set of rules) technique the authors call s-c term rewriting, where the s stands for sum and the c stands for carry. They use ACL2, an interactive and automated theorem proving system.

Results are reported for over 60 isolated multipliers, other configurations and MACs from 16 to 1024 bits. They all complete in times ranging from fractions of a second to 300 seconds for a 1024×1024 multiplier and 356 seconds for a 256×256 MAC, and compare favorably to other state-of-the-art work (which is orders of magnitude slower and times out at 90 minutes for several designs).

The paper takes some time to read; it is thoroughly researched but not totally self-contained. The 37 references cover standard multiplier designs (e.g. Brent-Kung, Wallace…) and the state of the art in formal verification (BDDs, BMDs, SAT, SMT, computer algebra methods). Readers not interested in this level of detail can focus on the results and conclusions, which are impressive and advance the state of the art.


TSMC’s Business Update and Launch of a New Strategy
by Claus Aasholm on 07-30-2024 at 10:00 am

TSMC Fab Utilization 2024

What looks like a modest market expansion strategy is anything but modest.

Insights into the Semiconductor Industry and the Semiconductor Supply Chain.

As usual, when TSMC reports, the Semiconductor industry gets a spray of insights that help understand what goes on in other areas of the industry. This time, TSMC gave more insight into their new Foundry 2.0 strategy, which will be covered later in this post.

The Q2-2024 result was a new revenue record indicating that the Semiconductor industry is out of the downcycle and ready to aim for new highs.

However, TSMC’s gross and operating profits have not returned to the levels seen the last time revenue was over $20B/qtr. This is a new situation that needs to be unpacked.

Semiconductor manufacturing companies need to spend significant capital every quarter to maintain and service their equipment. Spending at the maintenance capex level ensures that manufacturing capacity does not decline.

From the end of 2020 until the end of 2023, TSMC made significant capex investments above maintenance level. The company then dropped capex to just above maintenance. This capacity is now coming online, which has lowered TSMC’s utilisation. The TSMC of Q2-24 has a lot more capacity than at the last peak.

TSMC’s management did report increasing manufacturing utilisation, which means there is still spare capacity, although it might not be the capacity that TSMC needs.

Other gross margin levers were revealed in the investor call.

While increasing manufacturing activity, combined with subsidy payments and selective price increases, lifted the gross margin, there were also headwinds.

Inflation is increasing the cost of materials. As Taiwan’s largest electricity consumer, TSMC depends on grid expansion to fuel future growth. The investment in new and cleaner electricity is increasing electricity prices.

Also, the higher operating costs of the future manufacturing facilities in Arizona and Kumamoto would negatively impact gross margins.

Lastly, the company mentioned the conversion from 5 nm to 2 nm. It was earlier indicated that this was only Apple, but now it looks like TSMC is under great pressure from more of its HPC customers to migrate to 2nm.

Migrating customers takes time and effort, and it also takes time before manufacturing is sufficiently stable to generate good yields and become economically viable.

The market view

Unsurprisingly, TSMC is increasingly becoming THE supplier to the high-performance computing industry, as seen in the Q2-24 share of divisional revenue. Mobile is still significant, mainly due to Apple, but it is decreasing in share.

A revenue timeline shows the growth in Q2-24 comes from a step function increase in HPC revenue.

The annual growth rate for high-performance computing has been impressive, but the recent quarterly growth rate is even higher, corresponding to a 145% CAGR for HPC.

HPC’s revenue share is increasing relentlessly, and TSMC is becoming a high-performance computing company. This is one of the drivers towards TSMC’s new Foundry 2.0 strategy.

Technology

While Apple has made a long-term commitment to TSMC to obtain exclusivity to the new N2 process, this is not likely to last as long as Apple’s exclusivity to the 3nm process, which has lasted for a year.

TSMC expects the transition to 2nm to be faster and to involve more products in its first two years than the 3nm and 5nm transitions combined.

This means more TSMC clients than just Apple (coming from 3nm) want to get to 2nm. Not surprisingly, the main candidates are Nvidia (from 4/5nm) and AMD and Intel (from 5nm).

It took the 3-5nm business four years to reach 50% of total revenue, while it took 3nm only two quarters to reach 15%.

A comparison between HPC and 3nm revenue shows a similar trajectory.

As Apple has been the only 3nm customer up until now, it would be natural to assume that the growth spike is due to Apple, but this is likely not the case.

Apple, being a consumer-oriented company, has a very specific buying pattern due to the seasonality of its business.

While Apple’s COGS also include mobile and other business, the same pattern can be seen in TSMC’s 3nm business: Q3 and Q4 up, Q1 down.

You would expect the Apple 3nm business to go down in Q2 as well. It likely did, but TSMC’s 3nm business grew by 84% in Q2-24, so something else is going on.

The jump in revenue likely comes from one of the 5nm customers, and as the 5nm revenue did not decline, it is probably a new product.

While it could be Nvidia, the AI giant is likely busy selling Blackwell products, which are based on TSMC’s 5nm-class (N4) process.

More likely it is Intel’s Lunar Lake, AMD’s Instinct series, or an upgrade of Zen 5. Both Intel and AMD report soon, and this article will be updated. From a strategic perspective, TSMC is moving from a few customers using the leading-edge technology to many. This also means TSMC is becoming more important to its high-performance computing customers.

Technology Development

There is a good reason TSMC’s clients want to get to 2nm and even better technologies such as A16. The performance gains are significant.

The relative performance improvements (in layman’s terms) can be seen in the figure below: power improvement (at similar speed) or speed improvement (at similar power).

A16 is best suited to specific HPC products with complex signal routing and dense power delivery networks. Volume production is scheduled for the second half of 2026.

TSMC normally introduces intermediate upgrades for each of its processes, and the benefits can be significant, as seen with the N2P process. It is almost like an entirely new process node but with less risk and cost. It will be incredibly attractive for the AI GPU combatants to get to these nodes as fast as possible. The balance of power is leaning further towards TSMC.

CoWoS Capacity

From a strategic perspective, advanced packaging is becoming incredibly important and is the main driver behind the Foundry 2.0 strategy.

Even though TSMC is adding as much advanced packaging capacity as possible, it is nowhere near fulfilling demand. TSMC expects to grow capacity at a 60% CAGR but will not be able to meet demand before sometime in 2026 at best.

Margins have been low but are improving toward the corporate average as yields improve. CoWoS is the main reason TSMC is changing its strategy to Foundry 2.0. All of the HPC customers need advanced packaging to integrate high-bandwidth memory on an interposer. Later, the same will be needed for PC processors and everything else AI.

The New Foundry 2.0 Strategy

The Foundry 2.0 strategy looks like a market expansion from the $125B (2023) foundry market, adding the $135B packaging market and bringing TSMC’s total addressable market to $250B. This changes TSMC’s market share from 55.3% to 28% under the new definition.
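
Taking the article's figures at face value, the share math works out as follows:

```python
# Reproducing the share shift with the article's own figures (billions of USD).
# Note: the quoted TAM of $250B is used as stated, even though $125B + $135B
# sums a little higher.
foundry_market_2023 = 125.0
tsmc_foundry_share = 0.553
new_tam = 250.0

tsmc_base = tsmc_foundry_share * foundry_market_2023      # about $69B
print(f"TSMC implied revenue base: ${tsmc_base:.0f}B")
print(f"share of old $125B market: {tsmc_base / foundry_market_2023:.1%}")
print(f"share of new ${new_tam:.0f}B market: {tsmc_base / new_tam:.1%}")  # about 28%
```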

Apart from market expansion, Foundry 2.0 also aligns closely with the changing needs of the top HPC customers: Apple, Nvidia, Intel, AMD and Broadcom. TSMC can deliver basically everything but the memory elements of the CPU and GPU boards.

From a technology perspective, the move makes TSMC less dependent on the continuation of Moore’s Law and its prediction of continuously smaller 2D geometries, as advanced packaging effectively opens the door to 3D integration and further technology advancement.

It represents the transformation of TSMC from a components company to a subsystems company, just like Nvidia’s transformation from GPU to AI Server boards.

As Nvidia developed Blackwell, it became obvious that the GPU silicon itself was being diluted: the introduction of more memory, silicon interposers and large slabs of advanced substrates made the GPU’s share of the BOM decline. The Foundry 2.0 strategy is also aimed at controlling more of the supply chain in order to maintain TSMC’s importance as a supplier to the CPU and GPU customers.

The capital allocation strategy reveals the current fiscal importance of each of the main areas of TSMC’s business. In case we didn’t know it, TSMC is still an advanced logic node company, and that will continue. Advanced packaging, testing and mask making will be allocated about 10% of the total CapEx budget, which is $31B in 2024.

While this sounds modest, the capital requirements for test and packaging (OSAT) companies are a lot lower than for semiconductor manufacturing. The largest OSAT companies are ASE and Amkor, with CapEx spend estimated at $2.5B in 2024. TSMC is dead serious about entering this industry, and the established companies need to be on their toes.
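
For a sense of scale, here is a rough comparison based on the numbers above (the article's estimates, not independent data):

```python
# Rough scale comparison using the article's estimates (billions of USD).
tsmc_total_capex_2024 = 31.0
advanced_pkg_share = 0.10       # ~10% for advanced packaging, testing and mask making
tsmc_pkg_capex = advanced_pkg_share * tsmc_total_capex_2024   # about $3.1B

largest_osat_capex = 2.5        # article's 2024 estimate for the largest OSATs

print(f"TSMC packaging/test slice: ~${tsmc_pkg_capex:.1f}B")
print(f"largest OSAT CapEx:        ~${largest_osat_capex:.1f}B")
# Even a 10% slice of TSMC's budget rivals the biggest OSAT players' entire spend.
```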

Conclusion

TSMC’s new strategy has a title that completely lacks imagination, but the strategy itself is very well developed and very ambitious. While Intel and Samsung are busy figuring out how to get their advanced foundry nodes to work and finding customers for them, TSMC is expanding its silicon leadership into advanced packaging, becoming an even more important supplier to the key AI customers. This will also strengthen TSMC’s bargaining position, enabling the company to command more of the value generated in AI, should it choose not to be as modest and humble as usual.

Also Read:

TSMC Foundry 2.0 and Intel IDM 2.0

Q&A With TSMC on Next-Gen Foundry

Will Semiconductor earnings live up to the Investor hype?

 


CAST, a Small Company with a Large Impact on Many Growth Markets #61DAC
by Mike Gianfagna on 07-30-2024 at 6:00 am

Semiconductor IP has continued to grow as a market, and it was clearly a star performer at #61DAC.  We all know the large suppliers of IP for semiconductors, but the market is actually quite diverse, with many players supporting many applications.  I had a chance to meet with two executives from CAST, a company with a remarkably diverse product line, a unique business model, and incredibly long staying power. I’d like to share some of what I learned during my meeting with CAST, a small company with a large impact on many markets.

History and Strategy

Nikos Zervas

My meeting was with Nikos Zervas, CEO and Paul Lindemann, a marketing consultant at CAST.  Nikos has been with CAST for 24 years. Prior to joining CAST, he founded Alma Technologies after receiving his Ph.D. in low power VLSI from the University of Patras in Greece. Paul has been consulting with CAST for over 30 years. His prior experience includes GTE Laboratories, Silc Technologies as a co-founder and Racal-Redac. The knowledge of semiconductors and IP design possessed by these two gentlemen is substantial.

CAST was founded in 1993, so the company has been selling IP well before the IP market really existed. The collective experience of the company in general and the leadership team in particular is one of the things that sets CAST apart.

Paul Lindemann

Another thing that makes CAST unique is its business model. You can learn all the details in this interview with Dr. Zervas on SemiWiki here. I will summarize the key points:

  • CAST has a relatively small direct team – less than 30 people.
  • CAST sells and supports IP developed by its own engineers as well as that developed by several close partners. The collective staff of its partner network is over 100.
  • The partners bring technical expertise in specific areas while CAST adds quality standards and assurance, marketing, sales, and front-line support.
  • Partner IP is treated the same as CAST IP. All must pass rigorous quality checks and have extensive documentation.
  • CAST’s experienced front-line support team handles many issues directly, but the original IP developers are always available to help customers when needed.
  • The support record for CAST is stellar – first response is typically under 24 hours and resolution is typically under three days.
  • The support team for CAST has a worldwide footprint, which helps to deliver the statistics cited.

The above list is what makes CAST such a potent IP supplier. Its unique approach to partnering with IP companies creates a vast catalog with a very personal and high touch feel from a support perspective.

The CAST IP Catalog – Spotlight on Automotive

CAST has developed an extensive IP catalog over the past 30 years. Nikos and Paul mentioned automotive, compression and processors (e.g., RISC-V) as key areas. They went on to point out CAST also serves many customers in the defense & mission critical, industrial automation, and consumer markets. The company footprint is much larger than this, with over 15 major IP categories and many titles within each category.

Let’s look at some of the support for automotive applications.

Processor IP

EMSA5-FS – 32-bit embedded RISC-V Functional Safety Processor. This Harvard architecture processor implements a single-issue, in-order, 5-stage execution pipeline, supporting the RISC-V 32-bit base integer instruction set (RV32I) or the 32-bit base embedded instruction set (RV32E).

The part is ISO 26262 ASIL-D ready and includes a complete certification package with FMEDA and SAM documents. Fail-safe features include modular redundancy, ECC, reset and safety manager modules. It also contains a memory protection unit with up to 16 regions of configurable size.

Automotive Bus Controllers

CAST has led the market with very early CAN and recently TSN Ethernet IP cores. The company’s automotive interconnect offerings today include the following:

CAN-CTRL (CAN CC, CAN FD, and CAN XL Bus Controller). The CAN-CTRL implements a highly featured and reliable Controller Area Network (CAN) bus controller that performs serial communication according to the Controller Area Network (CAN) protocol.

CAN-SEC (CANsec Acceleration Engine). The CAN-SEC IP core implements a hardware accelerator for the CANsec extension of the CAN-XL protocol, as defined in CiA’s 613-2 specification.

CSENT (SENT/SAE J2716 Controller). The CSENT core implements a controller for the Single Edge Nibble Transmission (SENT) protocol. It complies with the SAE J2716 standard and can drive trigger pulses for sensors operating in synchronous mode. It can be used to convey data from one or multiple sensors to a centralized controller over a single SENT line.

LIN-CTRL (LIN Bus Master/Slave Controller). This IP implements a communication controller that transmits and receives complete Local Interconnect Network (LIN) frames to perform serial communication according to the LIN Protocol Specification.

TSN-EP (TSN Ethernet Endpoint Controller). The TSN-EP implements a configurable controller meant to ease the implementation of endpoints for networks complying to the Time Sensitive Networking (TSN) standards.

TSN-SE (TSN Ethernet Switched Endpoint Controller). The TSN-SE implements a configurable controller meant to ease the implementation of switched endpoints for Time Sensitive Networking (TSN) Ethernet networks.

TSN-SW (Multiport TSN Ethernet Switch). The TSN-SW implements a highly flexible, low-latency, multiport TSN Ethernet switch. It supports the hardware functionality for Ethernet bridging according to the IEEE 802.1Q standard and implements the essential TSN timing synchronization and traffic-shaping protocols (i.e. IEEE 802.1AS-2020, 802.1Qav, 802.1Qbv, and 802.1Qbu, 802.1br).

To Learn More

The list above just scratches the surface of what CAST has to offer for automotive design. There are many other markets served by the company, including processors and compression as mentioned, plus encryption and security, and most popular interfaces and peripherals.

You can get a broad overview of the CAST IP catalog here.  You can also access a recent white paper entitled Popular CAN Bus Controller Core Passes Another Rigorous Plugfest here. And that’s some of what I learned at #61DAC about CAST, a small company with a large impact on many markets.


AMIQ EDA Integrated Development Environment #61DAC
by Daniel Payne on 07-29-2024 at 10:00 am

I stopped by the AMIQ EDA booth at DAC to get an update from Tom Anderson about their Integrated Development Environment (IDE), aimed at helping design and verification engineers save time. In my early IC design days we used either vi or emacs and were happy to have a somewhat smart text editor. With an IDE you get a whole new way of creating clean RTL code more quickly, and because that code is checked for correctness before simulation or synthesis, you also avoid burning expensive EDA tool licenses too early.

AMIQ EDA at #61DAC

With an IDE, RTL code doesn’t have to be perfect before you can find and fix multiple syntax, typing and even connectivity errors. A simulator or synthesis tool will simply stop after finding the very first error, but not DVT Eclipse IDE and DVT IDE for Visual Studio (VS) Code. To get your code clean more quickly, the IDE automates things like auto-completion of names, variables, objects and nets by letting you choose from a pop-up list. Auto-suggestions and templates are provided for common constructs like loops, if-then-else and other statements, as well as commonly used assertions and UVM-compliant elements, so you end up with fewer errors in your RTL design or testbench code.

All the popular IC design and verification languages are supported in the IDE:

  • Verilog
  • SystemVerilog
  • VHDL
  • Partial Verilog-AMS
  • PSS
  • PSL
  • UPF/CPF
  • e Language

AMIQ Consulting was founded in 2003 as a verification services provider, and AMIQ EDA is a spin-off that started shipping their first product in 2008, after they used their own IDE internally for everyday real life projects. They now have customers around the world, with representatives providing local support as needed. The IDE runs on Linux, Windows, and macOS.

For engineers transitioning from vi and emacs, you can re-use some of your favorite shortcuts to speed the learning curve. Evaluating DVT IDE is pretty quick and simple by visiting their Download page to get started.

Shown below is the GUI for the IDE: on the left you can view your hierarchy, in the middle is the color-coded source code editor, and on the right you can view the connectivity of your code as a diagram or even a Finite State Machine (FSM). Color coding helps you see the syntax more clearly, and there are links to class definitions. The coolest feature is how fast this IDE does incremental compilation, so any typing errors are highlighted instantly.

DVT Eclipse

Complex operations like refactoring are supported: you can rename a method, for example, see where the change propagates, and have all the method calls updated quickly. Any fixes required in your code are auto-suggested, so you get to choose what’s appropriate instead of being surprised. The IDE acts like an expert system, using predictable, deterministic intelligence.

One demo that Tom showed me was how source code for an FSM could automatically generate a diagram, making the state transitions more understandable and quicker to debug and verify. This IDE also has schematic connectivity across the entire design hierarchy, where you can click on any signal then show the source code for that signal. Engineers can traverse the hierarchy, up and down, to quickly clarify design intent.

Personal preferences like color coding and even dark mode are accessible. Users can view types, instances, packages and members just by clicking. Version control for check-in and check-out is supported in a tool flow using your favorite data management tools.

Summary

Design and verification engineers that are ready to get clean code faster should check out what the DVT Eclipse IDE and DVT IDE for VS Code have to offer. Evaluations are easy to do with a minimum of paperwork, and you’ll soon get to visualize your own RTL code for design and verification tasks, just to see how much more efficient an IDE can be.

Related Blogs


Alphawave Semi Tapes Out Industry-First, Multi-Protocol I/O Connectivity Chiplet for HPC and AI Infrastructure
by Kalar Rajendiran on 07-29-2024 at 6:00 am

In the rapidly evolving landscape of high-performance computing (HPC) and artificial intelligence (AI), the demand for increased processing power, efficiency, and scalability is ever-growing. Traditional monolithic chip designs are increasingly unable to keep pace with these demands, leading to the emergence of chiplets as a revolutionary approach. Chiplets can be combined to create a complete system, offering significant advantages in flexibility, performance, and cost efficiency. They enable the creation of highly customized solutions tailored to specific workloads, making them particularly valuable for HPC and AI applications where performance and efficiency are paramount.

For a thriving chiplet ecosystem, it is crucial to address both technical and business factors comprehensively. Alphawave Semi’s recent achievement in the industry-first tape-out of a multi-protocol I/O connectivity chiplet delivering 1.6Tbps throughput underscores this.

Technical and Business Dynamics of a Chiplet Ecosystem

The success of chiplets in HPC and AI infrastructure hinges not only on technical advancements but also on robust business considerations. This dual focus is pivotal in meeting the diverse needs of industries reliant on cutting-edge computing capabilities.

Technical Dynamics

Advanced interconnect technologies for high bandwidth and low latency, standardized and interoperable designs, and sophisticated 3D and heterogeneous packaging solutions are all crucial. Efficient power delivery and dynamic management, effective thermal solutions, comprehensive design tools, robust testing protocols, scalable and customizable architectures, seamless integration with existing systems, and strong security measures are essential. These technical factors collectively ensure the efficiency, performance, reliability, and flexibility necessary to support diverse applications in modern computing environments.

Business Dynamics

The availability of a diverse range of ready-to-use chiplets is crucial for a thriving chiplet ecosystem, fitting into several business factors such as market demand and customer engagement. Ensuring a wide array of chiplets caters to various industries, enhances market growth, and maintains competitiveness. This diversity in chiplet offerings ensures the ecosystem’s adaptability and responsiveness to evolving industry demands.

Industry standards and interoperability, strong collaboration and partnerships, robust and scalable supply chains, and cost-efficient manufacturing are all essential. Continuous innovation and advanced R&D, diverse application support, and flexible IP licensing are crucial. Ensuring regulatory compliance, maintaining high-quality assurance, attracting and retaining skilled talent, and raising market awareness further support growth. These business factors collectively drive the development, adoption, and sustainability of chiplet technology in various high-performance computing and AI applications.

Industry-First Multi-Protocol I/O Connectivity Chiplet

Alphawave Semi’s recently announced multi-protocol I/O connectivity chiplet delivering 1.6Tbps supports PCIe, CXL, Ethernet, and proprietary high-speed links, offering unparalleled versatility and performance. This versatility ensures seamless integration across diverse computing environments and is poised to revolutionize data transfer efficiency, enabling faster AI model training, more robust HPC workflows, and scalable infrastructure solutions. By supporting a spectrum of communication protocols such as PCIe, CXL, and Ethernet at cutting-edge speeds, the chiplet empowers data centers, AI accelerators, and high-performance computing platforms with enhanced flexibility and scalability. The chiplet’s high bandwidth and low latency are particularly beneficial for AI workloads, facilitating faster training and more efficient inference processes. This can accelerate the development and deployment of AI models, pushing the boundaries of what is possible in machine learning and data analytics.

Alphawave Semi’s Solutions

Alphawave Semi brings both breadth and depth to the chiplet ecosystem. Its comprehensive portfolio of high-speed connectivity solutions and advanced packaging technologies ensures that chiplet-based systems can achieve unprecedented levels of performance and efficiency. By focusing on both technical and business factors, Alphawave Semi is driving the adoption and sustainability of chiplet technology. The company’s innovative R&D efforts, strategic partnerships and commitment to quality further strengthen the ecosystem.

Summary

As chiplets continue to evolve, their ability to integrate seamlessly with existing technologies and adapt to new applications will be crucial. With its innovative solutions and strategic approach, Alphawave Semi is well-positioned to lead the charge towards a more connected and intelligent world, driving the next wave of advancements in HPC and AI infrastructure.

For more details, visit Alphawave Semi website.

Also Read:

Driving Data Frontiers: High-Performance PCIe® and CXL® in Modern Infrastructures

AI System Connectivity for UCIe and Chiplet Interfaces Demand Escalating Bandwidth Needs

Alphawave Semi Bridges from Theory to Reality in Chiplet-Based AI