
Synopsys Collaborates with TSMC to Enable Advanced 2D and 3D Design Solutions

by Daniel Nenni on 09-29-2025 at 6:00 am


Synopsys has deepened its collaboration with TSMC, certifying the Ansys portfolio of simulation and analysis tools for TSMC’s cutting-edge manufacturing processes including N3C, N3P, N2P, and A16. This partnership empowers chip designers to perform precise final checks on designs, targeting applications in AI acceleration, high-speed communications, and advanced computing. Additionally, the companies have developed an AI-assisted design flow for TSMC’s Compact Universal Photonic Engine (COUPE™) platform, streamlining photonic design and enhancing efficiency.

Multiphysics and AI-Driven Design Innovations

Synopsys and TSMC are advancing multiphysics analysis for complex, hierarchical 3DIC designs. The multiphysics flow integrates tools like Ansys RedHawk-SC™, Ansys RedHawk-SC Electrothermal™, and Synopsys 3DIC Compiler™ to enable thermal-aware and voltage-aware timing analysis. This approach accelerates convergence for large-scale 3DIC designs, addressing challenges in thermal management and signal integrity critical for high-performance chips.

For TSMC’s COUPE platform, Synopsys leverages AI-driven tools like Ansys optiSLang® and Ansys Zemax OpticStudio® to optimize optical coupling systems. These tools, combined with Ansys Lumerical FDTD™ for photonic inverse design, allow engineers to create custom components, such as grating couplers, while reducing design cycle times and improving design quality through sensitivity analysis. This AI-assisted workflow is transformative for photonic applications, enabling faster development of high-speed communication interfaces.

Certifications for Advanced Process Technologies

The collaboration includes certifications for key Synopsys tools across TSMC’s advanced nodes. Ansys RedHawk-SC and Ansys Totem™ are certified for power integrity verification on TSMC’s N3C, N3P, N2P, and A16™ processes, ensuring reliable chip performance. Ansys HFSS-IC Pro™, designed for electromagnetic modeling, is certified for TSMC’s N5 and N3P processes, supporting system-on-chip electromagnetic extraction. These certifications enable designers to meet stringent requirements for AI, high-performance computing (HPC), 5G/6G, and automotive electronics.

Additionally, Ansys PathFinder-SC™ is certified for TSMC’s N2P process, offering electrostatic discharge current density (ESD CD) and point-to-point (P2P) checking. This tool enhances chip resilience against electrical overstress, accelerating early-stage design validation and improving product durability, particularly for complex 3DIC and multi-die systems. Synopsys is also working with TSMC to develop a photonic design kit for the A14 process, expected in late 2025, further expanding support for photonic applications.

Industry Impact and Strategic Partnership

This collaboration underscores Synopsys’ leadership in providing design solutions for next-generation technologies.

“Synopsys provides a broad range of design solutions to help semiconductor and system designers tackle the most advanced and innovative products for AI enablement, data center, telecommunications, and more,” said John Lee, vice president and general manager of the semiconductor, electronics, and optics business unit at Synopsys. “Our strong and continuous partnership with TSMC has been a key factor in maintaining our position at the forefront of technology while providing consistent value to our shared customers.”

“TSMC’s advanced process, photonics, and packaging innovations are accelerating the development of high-speed communication interfaces and multi-die chips that are essential for high-performance, energy-efficient AI systems,” said Aveek Sarkar, director of the ecosystem and alliance management division at TSMC. “Our collaboration with OIP ecosystem partners such as Synopsys has delivered an advanced thermal, power and signal integrity analysis flow, along with an AI-driven photonics optimization solution for the next generation of designs.”

Bottom line: By combining Synopsys’ simulation expertise with TSMC’s advanced process technologies, this partnership accelerates the development of robust, high-performance chips, solidifying both companies’ roles in shaping the future of semiconductor design.

The full press release is here.

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the leader in engineering solutions from silicon to systems, enabling customers to rapidly innovate AI-powered products. We deliver industry-leading silicon design, IP, simulation and analysis solutions, and design services. We partner closely with our customers across a wide range of industries to maximize their R&D capability and productivity, powering innovation today that ignites the ingenuity of tomorrow. Learn more at www.synopsys.com.

Also Read:

Synopsys Announces Expanding AI Capabilities and EDA AI Leadership

The Rise, Fall, and Rebirth of In-Circuit Emulation (Part 1 of 2)

448G: Ready or not, here it comes!


Via Multipatterning Regardless of Wavelength as High-NA EUV Lithography Becomes Too Stochastic

by Fred Chen on 09-28-2025 at 10:00 am


For the so-called “2nm” node or beyond, the minimum metal pitch is expected to be 20 nm or even less, while at the same time, contacted gate pitch is being pushed to 40 nm [1]. Therefore, we expect via connections that can possibly be as narrow as 10 nm (Figure 1)! For this reason, it is natural to expect High-NA EUV lithography to be the go-to method for patterning at such small scales. However, High-NA’s resolution benefit is offset by its reduced depth of focus [2,3]. A 20 nm defocus can be expected to reduce the width of a spot projected by a High-NA EUV system by 10%. Therefore, the resist cannot be expected to be thicker than 20 nm. This, in turn, reduces the density of absorbed photons. A 20 nm thick chemically amplified resist is expected to absorb only 10% of the incident dose. For an incident dose of 60 mJ/cm², only 4 photons are absorbed per square nanometer! The Poisson statistics of shot noise entail a 2σ variation of 100%! This will be detrimental for edge roughness (Figure 1). The edge stochastics become prohibitive for High-NA EUV.

Figure 1. Stochastic EUV photon absorption (6 mJ/cm² averaged over 80 nm x 40 nm). The illumination in the High-NA EUV system is shown on the left.
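The absorbed-photon arithmetic above can be checked with a few lines of Python. This is a back-of-the-envelope sketch: the 60 mJ/cm² incident dose and ~10% absorption come from the text, while the 13.5 nm EUV wavelength is a standard assumption.

```python
import math

# Physical constants
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
WAVELENGTH = 13.5e-9  # EUV wavelength, m (assumed)

photon_energy = H * C / WAVELENGTH  # ~1.47e-17 J (~92 eV per photon)

incident_dose = 60e-3     # J/cm^2 (60 mJ/cm^2 incident)
absorbed_fraction = 0.10  # a 20 nm chemically amplified resist absorbs ~10%
NM2_PER_CM2 = 1e14        # nm^2 per cm^2

# Absorbed photons per square nanometer
photons_per_nm2 = incident_dose * absorbed_fraction / NM2_PER_CM2 / photon_energy

# Poisson shot noise: sigma = sqrt(N), so the relative 2-sigma spread is 2/sqrt(N)
rel_2sigma = 2 / math.sqrt(photons_per_nm2)

print(f"{photons_per_nm2:.1f} photons/nm^2, 2-sigma = {rel_2sigma:.0%}")
```

The result lands at roughly 4 photons per square nanometer, giving a 2σ spread of essentially 100%, matching the figures quoted in the text.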

In fact, even with a 0.33 NA EUV system, for 44 nm center-to-center vias or smaller, stochastic edge placement error (EPE) is projected to cause violation of a 5 nm total EPE spec [4]. Therefore, 2nm via patterning is expected to involve serious multipatterning, regardless of whether EUV is used or not.

Figure 2 shows a representative layout for via connections to gates and diffusion areas, where the minimum center-to-center via spacing is 40 nm, and the track pitch is 20 nm. Due to the 40 nm minimum center-to-center spacing constraint, two masks are needed for the gate contacts, while four masks are needed for the source-drain contacts. Note that this number is the same for ArF immersion or EUV: two vias separated by 40 nm will require two separate masks whether using DUV or EUV.

Figure 2. A representative layout for via connections to gate and diffusion areas for contacted gate pitch of 40 nm and track pitch of 20 nm. Different number labels on the vias indicate different masks used for that layer.
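The mask counting implied by the spacing rule can be sketched as a greedy coloring of the via conflict graph. This is a toy illustration, not the Figure 2 layout: the coordinates below are invented, and only the rule from the text (vias 40 nm apart or closer cannot share a mask) is used.

```python
def assign_masks(vias, min_spacing=40.0):
    """Greedily assign vias to masks so that any two vias whose
    center-to-center distance is min_spacing (nm) or less land on
    different masks (i.e., cannot share a single exposure)."""
    masks = {}
    for i, (x, y) in enumerate(vias):
        conflicting = {
            masks[j] for j, (xj, yj) in enumerate(vias[:i])
            if ((x - xj) ** 2 + (y - yj) ** 2) ** 0.5 <= min_spacing
        }
        m = 0
        while m in conflicting:
            m += 1
        masks[i] = m
    return masks

# Gate contacts in a row at 40 nm contacted gate pitch: alternating works, 2 masks
gate_row = [(x, 0) for x in range(0, 160, 40)]
print(len(set(assign_masks(gate_row).values())))   # 2

# A tight 2x2 cluster at 20 nm track pitch: every pair conflicts, so 4 masks
cluster = [(0, 0), (20, 0), (0, 20), (20, 20)]
print(len(set(assign_masks(cluster).values())))    # 4
```

Since the exposure wavelength never enters the spacing rule, the mask count comes out the same for ArF immersion and EUV, as the text notes.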

For the BEOL, we expect fully self-aligned vias to be used [5]. We can also expect vias to be placed on a crossed diagonal grid [6]. When these arrangements are combined, the number of multipatterning masks can be minimized [7]. Note that multipatterning is still necessary even with EUV, as the expected center-to-center distances are still too small (Figure 3). One mask would be needed to overlay a diagonal line grid that blocks half of the crosspoint locations where the two adjacent metal layers overlap (Figure 4). The pitch of this diagonal line grid will require self-aligned quadruple patterning (SAQP) with ArF immersion lithography, and the stochastic defect density cliff below 36 nm pitch [8,9] would also likely force EUV to use self-aligned double patterning (SADP). From the remaining locations, two ArF immersion masks or one EUV mask would be used to select which ones would be kept for forming vias (Figure 5). Note that this would leave a diagonal line signature on the vias (Figure 6).

Figure 3. The center-to-center distances above are too small to allow single exposure patterning.

Figure 4. Diagonal line block mask blocks half of the crosspoint locations where the two adjacent metal layers overlap. The blocked overlap locations are in red.

Figure 5. After the diagonal line block grid is in place, a keep mask or masks would select which metal layer locations would be kept for forming vias.

Figure 6. A diagonal line signature is left on the formed vias.
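The diagonal block grid of Figures 4-6 can be mimicked with a checkerboard filter over the metal-layer crosspoints. This is a toy sketch: the 20 nm track pitch comes from the text, while the grid extent and the even/odd-diagonal blocking convention are assumptions for illustration.

```python
import math

PITCH = 20  # nm track pitch on both metal layers

# Crosspoints of two orthogonal metal layers
points = [(PITCH * i, PITCH * j) for i in range(8) for j in range(8)]

# A diagonal line grid blocks every crosspoint on the "odd" diagonals,
# leaving a checkerboard of allowed via locations (half the sites)
kept = [(x, y) for (x, y) in points if ((x + y) // PITCH) % 2 == 0]

assert len(kept) == len(points) // 2  # exactly half the sites remain

# Remaining sites sit on diagonals; nearest neighbors are sqrt(2)*20 nm apart
min_d = min(math.dist(p, q) for i, p in enumerate(kept) for q in kept[i + 1:])
print(f"{min_d:.1f} nm")  # 28.3 nm
```

The surviving sites still sit about 28 nm apart along the diagonals, which is why keep masks (two with ArF immersion, or one with EUV) are still required to select the actual via locations.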

If we note the possible center-to-center distances on the crossed diagonal grid, we can see that relying on the basic repeated litho-etch (LE) approach to multipatterning will lead to up to three masks for EUV, and up to four masks for DUV, making use of an underlying 80 nm x 80 nm pitch tiling (Figure 7). The DUV LE4 approach would definitely be more cost-efficient than EUV LE3 [10]. Hence, any approach to make the requisite multipatterning more efficient, such as the diagonal line grid approach above or even directed self-assembly (DSA) [11], would help ensure getting to 20 nm track pitch and 40 nm contacted gate pitch.

Figure 7. Brute force repeated LE approach could require up to three EUV masks or four DUV masks.

References

[1] I. Cuttress (TechTechPotato), How We Get Down to 0.2nm CPUs and GPUs.

[2] A. Burov, A. V. Pret, R. Gronheid, “Depth of focus in high-NA EUV lithography: a simulation study,” Proc. SPIE 12293, 122930V (2022).

[3] F. Chen, High-NA Hard Sell: EUV Multipatterning Practices Revealed, Depth of Focus Not Mentioned.

[4] W. Gao et al., “Simulation investigation of enabling technologies for EUV single exposure of via patterns in 3nm logic technology,” Proc. SPIE 11323, 113231L (2020).

[5] V. Vashishtha, L. T. Clark, “ASAP5: A predictive PDK for the 5 nm node,” Microel. J. 126, 105481 (2022).

[6] Y-C. Hsiao, W. M. Chan, K-H. Hsieh, US Patent 9530727, assigned to TSMC; S-W. Peng, C-M. Hsiao, C-H. Chang, J-T. Tzeng, US Patent Application US20230387002; F. Chen, Routing and Patterning Simplification with a Diagonal Via Grid.

[7] F. Chen, Multipatterning Reduction with Gridded Cuts and Vias; F. Chen, Exploring Grid-Assisted Multipatterning Scenarios for 10A-14A Nodes.

[8] Y-P. Tsai et al., “Study of EUV stochastic defect on wafer yield,” Proc. SPIE 12954, 1295404 (2024).

[9] Y. Li, Q. Wu, Y. Zhao, “A Simulation Study for Typical Design Rule Patterns and Stochastic Printing Failures in a 5 nm Logic Process with EUV Lithography,” CSTIC 2020.

[10] L-Å. Ragnarsson et al., “The Environmental Impact of CMOS Logic Technologies,” EDTM 2022; E. Vidal-Russell, “Curvilinear masks extend lithography options for advanced node memory roadmaps,” J. Micro/Nanopattern. Mater. Metrol. 23, 041504 (2024).

[11] Z. Wu et al., “Quadruple-hole multiplication by directed self-assembly of block copolymer,” Proc. SPIE 13423, 134231O (2024).


CEO Interview with Jiadi Zhu of CDimension 

by Daniel Nenni on 09-28-2025 at 8:00 am


Jiadi Zhu is the CEO and founder of CDimension, a company rethinking chip design to shape the next generation of computing. Under his leadership, CDimension is creating the next generation of building blocks for chips, starting with materials and scaling up to full systems that can power everything from today’s AI and advanced computing to the quantum breakthroughs of tomorrow.

With a Ph.D. in Electrical Engineering from MIT and over a decade of work at the frontier of 2D materials and 3D integration, Jiadi has been widely recognized for his originality in device design and novel semiconductor materials. His research has been published in top journals like Nature Nanotechnology and presented at leading conferences including IEEE’s International Electron Devices Meeting.

Tell us about your company?

CDimension is pioneering a bottom-up strategy for the future of computing, beginning at the foundation of the chip stack: materials. We develop and supply next-generation two-dimensional (2D) semiconductors and insulators, enabling breakthroughs in both semiconductor circuits and quantum computing.

On the semiconductor side, our atomically thin 2D semiconductors significantly reduce power consumption and help overcome the global power wall in computing. On the quantum side, our single-crystalline 2D materials reduce noise at the source, enabling longer coherence times, higher fidelities, and scalable integration of quantum qubits.

Backed by more than 20 patents and a growing base of industrial and academic customers, our roadmap delivers wafer-scale materials today and pilot-scale quantum chips within 18 months. With this foundation, CDimension is positioned as the chip provider of the quantum era, building the material and device layer that will power tomorrow’s computing revolution.

What problem is CDimension solving?

The core issue we are addressing is that silicon has hit its limits. Put simply, chips are getting more power-hungry and harder to integrate. Most of the industry’s fixes to date are top-down, things like reconfiguring GPUs, but those only deliver incremental gains, not the 100× or even 1000× leap that future applications in AI and quantum will require over the next decade.

That’s why we believe the solution has to start at the bottom, with the materials themselves. For decades, 2D materials were stuck in the lab as tiny flakes with high defects and no scalability. We’ve broken that barrier by producing wafer-scale, high-quality semiconductors and insulators that customers can actually build on today.

I like to think of it this way: trying to build the future of chips with silicon alone is like trying to make a book out of bricks. It worked for a long time, but now the bricks are too bulky to stack any further. Our 2D materials are like pages: ultra-thin, smooth, and stackable. From there, you can build the books, or integrated circuits, and eventually even the writing inside, which is applications like quantum computing.

What application areas are your strongest?

Currently, our technology is making the biggest difference in quantum. While the field is gaining momentum, progress has been stalled by one critical bottleneck: noise. These random disturbances cause errors that prevent systems from scaling. Existing approaches struggle to fix it because they rely on imperfect materials or lack robustness at the materials level.

Our wafer-scale, single-crystalline 2D insulators reduce noise at the source. Instead of trying to scale by simply adding more qubits, we’re improving each individual qubit by lowering noise and extending coherence time. That allows more qubits to be connected reliably and makes them more robust to the surrounding environment, which is the critical step toward practical, large-scale quantum computing.

What keeps your customers up at night?

What keeps them up at night is the realization that the conventional approaches, such as silicon circuits and oxide dielectrics, can’t solve these problems on their own. Even the best design tweaks or architectural optimizations only squeeze out marginal gains, while demand from AI, data centers, and quantum continues to grow exponentially. Without a new foundation, they risk hitting hard limits: power budgets that data centers can’t sustain and quantum devices that can’t scale beyond prototypes.

That’s why CDimension resonates with them. We deliver wafer-scale, defect-free 2D materials that deliver a fundamental shift. For semiconductors, that means atomically thin transistors that slash power use. For quantum, it means ultra-smooth, single-crystalline insulators that suppress noise at the source. Together, these advances open the path from today’s bottlenecks to tomorrow’s scalable, high-performance systems.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape is crowded at the system and algorithm layers of computing: everyone from hyperscalers to startups is working on GPUs, AI accelerators, or new quantum modalities. But very few are tackling the materials bottleneck head-on, because 2D materials were long assumed to be stuck in the lab: fragile, defect-prone flakes with no path to scale.

CDimension is different. We’ve broken through that barrier, delivering wafer-scale, defect-free semiconductors and insulators, protected by more than 20 patents and developed by a team that combines world-class materials science with deep semiconductor engineering. This foundation lets us uniquely address the toughest pain points in the industry (power, noise, and robustness) at their source.

In short, while others focus on optimizing the top of the stack, CDimension is building the bottom layer, the material and device foundation that makes those optimizations truly scalable.

What new features/technology are you working on?

Our primary focus right now is quantum hardware. We’ve begun supplying single-crystalline 2D insulators that directly reduce noise in quantum devices, a critical step toward scalable, error-corrected quantum systems. These ultra-smooth interfaces extend coherence, improve fidelities, and enable more robust coupling, pushing quantum closer to real-world integration. Looking ahead, our goal is hybrid semiconductor–quantum systems within 2–3 years, with scalable quantum hardware arriving much sooner than most expect.

In parallel, we’re expanding a full suite of 2D materials (semiconductors, insulators, and conductors), all designed to integrate seamlessly with existing silicon workflows. Our first commercial release, announced recently, was ultra-thin MoS₂ grown directly on silicon wafers using a proprietary low-temperature process. These monolayers demonstrated up to 1,000× improvement in transistor-level energy efficiency compared to silicon and are already being sampled by customers across academia and industry. From MoS₂ to n-type, p-type, metallic, and insulating films at wafer scale, our platform is building the materials backbone for vertically integrated chips that unify compute, memory, and power in a single architecture.

How do customers normally engage with your company?

Today, customers are already purchasing our 2D materials for their own device research. Universities like Carnegie Mellon, Duke, and UC San Diego are currently using our wafers to bypass material synthesis and focus directly on building and testing new devices. Industry R&D teams are doing the same, evaluating our wafers for integration into semiconductor and quantum workflows.

But our offering is not just about supplying wafers; it’s about building the foundation for the next era of computing. As customers themselves scale, we aim to be a true technical partner, providing equipment, supporting integration, and eventually working together on full system solutions.

Also Read:

CEO Interview with Howard Pakosh of TekStart

CEO Interview with Barun Kar of Upscale AI

CEO Interview with Adam Khan of Diamond Quanta


Podcast EP308: How Clockwork Optimizes AI Clusters with Dan Zheng

by Daniel Nenni on 09-26-2025 at 10:00 am

Daniel is joined by Dan Zheng, VP of Partnerships and Operations at Clockwork. Dan was the General Manager for Product and Partnerships at Urban Engines which was acquired by Google in 2016. He has also held roles at Stanford University and Google.

Dan explores the challenges of operating massive AI hardware infrastructure at scale with Daniel. It turns out it can be quite difficult to operate modern GPU clusters efficiently. Communication bottlenecks within clusters and between clusters, stalled pipelines, network issues, and memory issues can all contribute to the problem. Debugging these issues can be difficult and Dan explains that re-starts from prior checkpoints can happen many times in large AI clusters and each of these events can waste many thousands of GPU hours.

Dan also describes Clockwork’s FleetIQ Platform and how this technology addresses these situations by providing nanosecond-accurate visibility correlated across the stack. The result is more efficient and productive AI clusters, allowing far more work to be accomplished. This provides more AI capability with the same hardware, essentially democratizing access to AI.

Contact Clockwork

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Howard Pakosh of TekStart

by Daniel Nenni on 09-26-2025 at 6:00 am


Howard Pakosh is a serial entrepreneur and angel investor. Mr. Pakosh is also Founder & CEO of the TekStart Group, a Toronto-based boutique incubator focusing on Fractional-C business development support, as well as developing, promoting and licensing technology into markets such as blockchain, Internet-of-Things (IoT) and Semiconductor Intellectual Property (SIP). TekStart is currently an early-stage investor in Piera Systems (CleanTech), Acrylic Robotics (Robotics), Low Power Futures (Semiconductors), ChipStart (Semiconductors), and Freedom Laser (Digital Health).

Mr. Pakosh has been involved in all phases of semiconductor development for over 30 years and was instrumental in the delivery of the first commercially available USB subsystem at his IP startups Xentec and Elliptic Technologies Inc. (both sold to Synopsys Inc.). Other ventures he’s recently led include the development of Micron’s Hybrid Memory Cube controller and the development of the most power-efficient crypto processing ASICs for SHA-256 and SCRYPT algorithms.

Tell us about your company.

When I started TekStart® in 1998, the mission was clear: give bold ideas the resources and leadership they need to become thriving businesses. The semiconductor field has always been high-stakes, demanding both creativity and flawless execution. Over time, TekStart has shifted from a commercialization partner to a true venture builder, now concentrating heavily on semiconductors and AI. Our purpose hasn’t changed. We exist to help innovators succeed. What has changed are our methods, which have adapted to an industry that’s become more global, more competitive, and far more complex.

What new features or technology are you working on?

What I am excited to share is a breakthrough we have achieved through our ChipStart® business unit. With Newport by ChipStart, we’ve proven we’re not only enabling innovation but driving it ourselves. Achieving up to 65 TOPS of performance at under 2 watts is a leap forward, unlocking a new level of performance-per-watt that opens doors to applications once thought impossible.

What problems are you solving?

The semiconductor industry faces three defining challenges: fragile supply chains, the demand for radical energy efficiency, and the relentless race to market. Newport by ChipStart is built to meet these challenges head-on. Instead of designs tied to exotic nodes, we enable resilient architectures that keep innovation moving, even in uncertain times. Instead of incremental power gains, we push for performance that redefines efficiency by delivering more capability per watt. Instead of waiting on the pace of new fabs, we help innovators leap from concept to production silicon faster than ever. Newport isn’t just solving today’s problems. It’s shaping the future of how chips get built.

What application areas are your strongest?

We see the greatest impact in Edge devices that demand real-time intelligence without relying on the cloud. Security and surveillance systems, for example, need to analyze video on-site to detect threats instantly, without the latency of sending data off-premise. In agriculture, sensors and vision systems powered by AI can monitor crops, optimize water use, and detect early signs of disease, helping farmers boost yields sustainably. AR/VR wearables require high-performance AI that runs efficiently in small, battery-constrained form factors, enabling immersive experiences without bulky hardware. And in industrial automation, factories are increasingly reliant upon AI-driven systems to inspect products, predict equipment failures, and streamline processes. These are just a few of the areas where Edge AI is not just useful but transformative, and where Newport by ChipStart is purpose-built to deliver.

What keeps your customers up at night?

The pace of innovation in semiconductors and AI has never been faster, and it’s only accelerating. Our customers worry about launching a product only to find it outdated months later. Staying relevant requires moving from concept to market at unprecedented speed – and doing so without compromising quality or performance. That’s where TekStart, through Newport by ChipStart, makes a real difference. We partner closely with innovators to compress development cycles and deliver silicon that keeps pace with today’s AI-driven world. By helping our partners beat obsolescence, we ensure they stay ahead in markets where timing is everything.

What does the competitive landscape look like and how do you differentiate?

Competition in our space revolves around two unforgiving dimensions: time-to-market and innovation. Both demand relentless execution to stay ahead. We differentiate by combining deep semiconductor expertise with an ecosystem of partners who bring complementary strengths in design, manufacturing, and deployment. Our team has decades of hands-on experience across ASIC design, operations, and AI applications. When combined with our extended network, we’re able to anticipate shifts in technology and deliver solutions that arrive ahead of the curve. This balance of speed and foresight is what keeps our customers competitive and what sets us apart in a crowded landscape.

How do customers normally engage with your company?

We typically engage through close collaboration across the semiconductor supply chain. That means working side-by-side with fab houses, manufacturers, and technology partners to ensure our products integrate seamlessly into their final deliverables. By embedding our solutions at the heart of their systems – whether it’s in smart cameras, connected devices, or industrial machinery – we help our partners to accelerate their own roadmaps. These collaborations go beyond transactions. They’re strategic partnerships designed to align our innovation with their market needs.

Also Read:

TekStart Group Joins Canada’s Semiconductor Council

CEO Interview with Barun Kar of Upscale AI

CEO Interview with Adam Khan of Diamond Quanta


SkyWater Technology Update 2025

by Daniel Nenni on 09-25-2025 at 10:00 am


SkyWater Technology, a U.S.-based pure-play semiconductor foundry, has made significant strides in 2025, reinforcing its position as a leader in domestic semiconductor manufacturing. Headquartered in Bloomington, Minnesota, SkyWater specializes in advanced innovation engineering and high-volume manufacturing of differentiated integrated circuits. The company’s Technology as a Service model streamlines development and production, serving diverse markets including aerospace, defense, automotive, biomedical, industrial, and quantum computing.

A major milestone in 2025 was SkyWater’s acquisition of Infineon Technologies’ 200 mm semiconductor fab in Austin, Texas (Fab 25), completed on June 30. This acquisition added approximately 400,000 wafer starts per year, significantly boosting SkyWater’s capacity. Fab 25 enhances the company’s ability to produce foundational chips for embedded processors, memory, mixed-signal, RF, and power applications. By converting this facility into an open-access foundry, SkyWater strengthens U.S. semiconductor independence, aligning with national security and reshoring trends. The acquisition, funded through a $350 million senior secured revolving credit facility, also added about 1,000 employees to SkyWater’s workforce, bringing the total to approximately 1,700.

On July 29, SkyWater announced a license agreement with Infineon Technologies, granting access to a robust library of silicon-proven mixed-signal design IP. Originally developed by Cypress Semiconductor, this IP is validated for high-volume automotive-grade applications and is integrated into SkyWater’s S130 platform. The portfolio includes ADCs, DACs, power management, timing, and communications modules, enabling customers to design high-reliability mixed-signal System-on-Chips within a secure U.S. supply chain. This move positions SkyWater as a trusted partner for both commercial and defense markets, reducing design risk and accelerating time to market.

SkyWater’s financial performance in 2025 reflects steady progress. The company reported second-quarter results at the upper end of expectations, with a trailing 12-month revenue of $290 million as of June 30. However, its Advanced Technology Services segment faced near-term softening due to federal budget delays impacting Department of Defense funding. Despite this, SkyWater remains confident in achieving record ATS revenue in 2025 provided funding issues are resolved. The company’s stock price stands at around $10.56 with a market capitalization of $555 million and 48.2 million shares outstanding.

Strategically, SkyWater is capitalizing on emerging technologies. Its collaboration with PsiQuantum to develop silicon photonic chips for utility-scale quantum computing highlights its expertise in cutting-edge applications. Additionally, SkyWater adopted YES RapidCure systems for its M-Series fan-out wafer level packaging (FOWLP) in partnership with Deca Technologies, enhancing prototyping speed and reliability for advanced packaging. These initiatives align with SkyWater’s focus on high-margin, innovative solutions, positioning it as a strategic partner in quantum computing and photonics.

SkyWater’s commitment to U.S.-based manufacturing and its DMEA-accredited Category 1A Trusted Foundry status underscore its role in supporting critical domestic markets. The company’s facilities are certified for aerospace (AS9100), medical (ISO13485), automotive (IATF16949), and environmental (ISO14001) standards, ensuring high-quality production. Despite challenges like funding delays and integration risks from the Fab 25 acquisition, SkyWater’s focus on innovation, strategic partnerships, and capacity expansion positions it for long-term growth. Analysts view SkyWater as a strong investment, with a 24-36 month price target of $20, reflecting confidence in its de-risked business model and alignment with U.S. reshoring trends.

Also Read:

Podcast EP307: An Overview of SkyWater Technology and its Goals with Ross Miller

Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability


TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging

by Daniel Nenni on 09-25-2025 at 8:00 am

TSMC OIP 2025

In his keynote at the TSMC OIP Ecosystem Forum, Dr. LC Lu, TSMC Senior Fellow and Vice President, Research & Development / Design & Technology Platform, highlighted the exponential rise in power demand driven by AI proliferation. AI is embedding itself everywhere, from hyperscale data centers to edge devices, fueling new applications in daily life.

Evolving models, including embodied AI, chain-of-thought reasoning, and agentic systems, demand larger datasets, more complex computations, and extended processing times. This surge has led to AI accelerators consuming 3x more power per package in five years, with deployments scaling 8x in three years, making energy efficiency paramount for sustainable AI growth.

TSMC’s strategy focuses on advanced logic and 3D packaging innovations, coupled with ecosystem collaborations, to tackle this challenge. Starting with logic scaling, TSMC’s roadmap is robust: N2 will enter volume production in the second half of 2025, N2P is slated for next year, A16 with backside power delivery arrives by late 2026, and A14 is progressing smoothly.

Enhancements to N3 and N5 continue to add value. From N7 to A14, speed at iso-power rises 1.8x, while power efficiency improves 4.2x, with each node offering about 30% power reduction over its predecessor. A16’s backside power targets AI and HPC chips with dense networks, yielding 8-10% speed gains or 15-20% power savings versus N2P.
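As a quick sanity check, compounding the quoted ~30% per-node power reduction over the four node transitions from N7 to A14 reproduces the 4.2x efficiency figure. This is a back-of-envelope sketch; the node list and per-node figure come from the text above, not from TSMC data:

```python
# Sanity-check the quoted node-scaling figures (illustrative assumption:
# a uniform ~30% power cut at each of the four node-to-node steps).
nodes = ["N7", "N5", "N3", "N2", "A14"]
per_node_power_reduction = 0.30          # "about 30% power reduction" per node

transitions = len(nodes) - 1
remaining_power = (1 - per_node_power_reduction) ** transitions
efficiency_gain = 1 / remaining_power    # iso-performance efficiency multiple

print(f"Power after {transitions} transitions: {remaining_power:.2f}x")
print(f"Implied efficiency gain N7 -> A14: {efficiency_gain:.1f}x")  # ~4.2x
```

Four compounded 30% reductions leave about 24% of the original power, i.e. roughly a 4.2x efficiency gain, matching the figure in the keynote.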

N2 Nanoflex DTCO optimizes designs for dual high-speed and low-power cells, achieving 15% speed boosts or 25-30% power reductions. Foundation IP innovations further enhance efficiency. Optimized transmission gate flip-flops cut power by 10% with minimal speed (2%) and area (6%) trade-offs, sometimes outperforming state gate variants.

Dual-rail SRAM with turbo/nominal modes delivers 10% higher efficiency and 150mV lower Vmin, with area penalties optimized away. Compute-In-Memory stands out: TSMC’s digital CIM-based Deep Learning Accelerator offers 4.5x TOPS/W and 7.8x TOPS/mm² over traditional 4nm DLAs, scaling from 22nm to 3nm and beyond. TSMC invites partnerships for further CIM advancements.

AI-driven design tools amplify these gains. Synopsys’ DSO.ai leads here, using reinforcement learning for PPA optimization and improving power efficiency by 5% in APR flows and 2% in metal stacks, for a total of 7%. For analog designs, integrations with TSMC APIs yield 20% efficiency boosts and denser layouts. AI assistants accelerate analysis 5-10x via natural language queries for power distribution insights.

Shifting to 3D packaging, TSMC’s 3D Fabric includes SoIC for silicon stacking, InFO for mobile/HPC chiplets, CoWoS for logic-HBM integration, and SoW for wafer-scale AI systems. Energy-efficient communication sees 2.5D CoWoS improving 1.6x as microbump pitches shrink from 45µm to 25µm. 3D SoIC boosts efficiency 6.7x over 2.5D, though with smaller integration areas (1x reticle vs. 9.5x). Die-to-die IPs, aligned with the UCIe standard, are available from partners like AlphaWave and Synopsys.

HBM integration advances: HBM4 on TSMC’s N12 logic base die provides 1.5x bandwidth and efficiency over HBM3e DRAM dies. N3P custom bases reduce voltage from 1.1V to 0.75V. Silicon photonics via co-packaged optics offers 5-10x efficiency, 10-20x lower latency, and compact forms versus pluggables. AI optimizations from Synopsys/ANSYS enhance this by 1.2x through co-design.
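The voltage drop alone accounts for much of that base-die saving: dynamic CMOS power scales roughly with the square of supply voltage. A back-of-envelope check, assuming constant capacitance and switching frequency (an illustrative simplification, not a TSMC figure):

```python
# Rough dynamic-power estimate for the base-die voltage drop cited above.
# P_dyn is proportional to C * V^2 * f; C and f are held constant here,
# which is an illustrative assumption, not a reported measurement.
v_old, v_new = 1.1, 0.75  # volts, per the N3P custom base-die claim

power_ratio = (v_new / v_old) ** 2
print(f"Dynamic power at 0.75V vs 1.1V: {power_ratio:.2f}x "
      f"(~{(1 - power_ratio) * 100:.0f}% reduction)")
```

Squaring the 1.1V-to-0.75V drop yields roughly half the dynamic power before any other optimization, which helps explain why custom logic base dies are attractive for HBM.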

Decoupling capacitance innovations using Ultra High-Performance Metal-Insulator-Metal plus Embedded Deep Trench Capacitor enables 1.5x power density without integrity loss, modeled by Synopsys/ANSYS tools. EDA-AI automates EDTC insertion (10x productivity) and substrate routing (100x, with optimal signal integrity).

Bottom line: Moore’s Law is alive and well. Logic scaling delivers 4.2x efficiency from N7 to A14, CIM adds another 4.5x, and IP/design innovations contribute 7-20%. Packaging yields 6.7x from 2.5D to 3D, 5-10x from photonics, and 1.5-2x from HBM/decoupling capacitor advances, with AI boosting productivity 10-100x.

TSMC honored partners with the 2025 OIP Awards for contributions in A14/A16 infrastructure, multi-die solutions, AI design, RF migration, IP, 3D Fabric, and cloud services. It is all about the ecosystem, absolutely.

Exponential AI power needs demand such innovations. TSMC’s collaborations drive 5-10x gains fostering efficient, productive AI ecosystems. Looking ahead, deeper partnerships will unlock even more iterations for sustainable AI advancement.

Also Read:

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion


Semiconductor Equipment Spending Healthy

Semiconductor Equipment Spending Healthy
by Bill Jewell on 09-24-2025 at 4:00 pm

Semiconductor Equipment Spend 2H 2025

Global spending on semiconductor manufacturing equipment totaled $33.07 billion in the 2nd quarter of 2025, according to SEMI and SEAJ. 2Q 2025 spending was up 23% from 2Q 2024. China had the largest spending at $11.36 billion, 34% of the total. However, China spending in 2Q 2025 was down 7% from 2Q 2024. Taiwan had the second largest amount and experienced the fastest growth, with 2Q 2025 spending of $8.77 billion, up 125% from 2Q 2024. TSMC was the major driver of the increase in Taiwan, with its capital expenditures (CapEx) up 62% in the first half of 2025 versus the first half of 2024. South Korea spending was the third largest at $5.91 billion, up 31% from a year earlier.
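The percentages can be cross-checked directly from the quoted dollar figures. Note that the implied Taiwan 2Q 2024 baseline below is derived from the quoted 125% growth rate, not reported separately:

```python
# Cross-check the SEMI/SEAJ 2Q 2025 figures quoted above (all in $ billions).
total_2q25 = 33.07
china, taiwan, korea = 11.36, 8.77, 5.91

china_share = china / total_2q25
print(f"China share of total: {china_share:.0%}")   # text says 34%

# Back out the implied 2Q 2024 Taiwan spend from "+125% year over year".
taiwan_2q24 = taiwan / 2.25
print(f"Implied Taiwan 2Q 2024 spend: ${taiwan_2q24:.2f}B")
```

China’s $11.36 billion works out to 34% of the $33.07 billion total, consistent with the reported share.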

North America showed the fastest growth in semiconductor equipment spending in 2024, with 4Q 2024 spending of $4.98 billion up 163% from $1.89 billion in 1Q 2024. However, North America spending in 1Q 2025 was $2.93 billion, down 41% from 4Q 2024. 2Q 2025 spending was down again at $2.76 billion. The spending drop can be attributed to delays in planned wafer fabs in the U.S. Intel has delayed completion of its wafer fab in New Albany, Ohio, until 2031 from its initial plan of 2025. Groundbreaking on Micron Technology’s wafer fab in Clay, New York, has been delayed until late 2025 from its original target of June 2024. Samsung reportedly delayed initial production at its new wafer fab in Taylor, Texas, to 2027 from an original goal of 2024.

Semiconductor equipment spending in Japan in 2Q 2025 was $2.68 billion, up 66% from 2Q 2024. Europe spending in 2Q 2025 was $0.72 billion, down 23% from a year earlier. Spending in the rest of the world (ROW) was $0.87 billion, down 28%.

The outlook for total semiconductor capital expenditures (CapEx) in 2025 remains essentially the same as our Semiconductor Intelligence estimates published in March 2025. We still project 2025 CapEx of $160 billion, up 3% from $155 billion in 2024. The outlook for 2026 CapEx is mixed. Intel expects CapEx to be lower in 2026 than its expected $18 billion in 2025. Micron Technology reported $13.8 billion in CapEx for its fiscal year ended in August 2025 and plans higher spending in fiscal year 2026. Texas Instruments projects 2026 CapEx of $2 billion to $5 billion compared to $5 billion in 2025. The company with the largest CapEx, TSMC, projects a range from $38 billion to $42 billion in 2025. TSMC has not provided CapEx estimates for 2026, but investment bank Needham and Company predicts TSMC will increase CapEx to $45 billion in 2026 and $50 billion in 2027.

The U.S. CHIPS and Science Act was passed in 2022 to boost semiconductor manufacturing in the U.S. As reported by IEEE, most of the $30 billion proposed in the CHIPS Act was awarded in the two months after President Trump’s election in November 2024 and before his inauguration in January 2025. The Trump administration wants to revise the CHIPS Act but has not offered specific plans. In August, the U.S. government made an $8.9 billion investment in Intel for a 9.9% stake in the company. $5.7 billion of the investment came from grants approved but not yet awarded to Intel under the CHIPS Act. The remaining $3.2 billion in funding came from the Secure Enclave program which was awarded to Intel in September 2024. A contributor to Forbes questions the wisdom of the Intel investment.

U.S. Commerce Secretary Howard Lutnick is reportedly considering the U.S. government taking shares in other companies which have received money under the CHIPS Act. Thus, the Trump administration seems to be changing the terms of the CHIPS Act which was approved by Congress in 2022. Without any approval from Congress, the Trump administration is apparently taking back grant money and using it for equity investments.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Still Strong in 2025

U.S. Imports Shifting

Electronics Up, Smartphones down


Yuning Liang’s Painstaking Push to Make the RISC-V PC a Reality

Yuning Liang’s Painstaking Push to Make the RISC-V PC a Reality
by Jonah McLeod on 09-24-2025 at 10:00 am

Embedded World Germany

At Embedded World 2025 in Nuremberg, Germany, on March 11, 2025, Yuning Liang, DeepComputing founder and CEO, walked onto the stage with a mischievous smile and a challenge. “What’s the hardest product to make?” he asked rhetorically. “A laptop. It’s bloody hard… but we did it. You can swap the motherboard, you can upgrade, you can’t make excuses anymore. Use it, break it, fix it,” he exclaimed.

Looking back, it seems serendipity guided Liang’s path. He left China in the ’90s for England’s Midlands, where he studied electronics at university. A professor’s recommendation sent him to Singapore’s Nanyang Technological University on scholarship, launching his journey into computer engineering and AI research. He paused his PhD studies to join Xerox’s Asia-Pacific expansion. Five years there exposed him to proprietary systems and then to open platforms. At Nokia, he led feature phone platforms, watching the company’s fortunes collapse under Android’s rise. He then spent four years at Samsung driving open enterprise software. Huawei recruited him to optimize Android runtimes—an experience that inspired him to launch a security platform startup with several million in VC funding in 2018.

Liang never intended to be a hardware entrepreneur. “I’m a software guy. I was stupid enough to waste all my money on hardware,” he exclaimed during his Open-Source Summit Europe (OSS) presentation in Amsterdam on Wednesday, August 27, 2025. His résumé backs him up: Xerox, Nokia, Samsung — all in software, from Java virtual machines (JVM) to mobile platforms. He honed his expertise putting JVMs onto PowerPC and ARM, he recalled at OSS Europe 2025. “I was in charge of the Java platform for Nokia on the feature phones. We had ARM7, ARM9 — not even Cortex at the time,” he intoned during his talk.

Then came the worldwide COVID lockdowns. During this time in China, RISC-V emerged as a strategic workaround for a nation seeking technological autonomy. This meant accelerating investment in homegrown RISC-V cores, toolchains, and ecosystems. The result was transformative. The architecture moved from a research curiosity into a national asset. The lockdown didn’t just expose vulnerabilities—it galvanized a shift toward open silicon, with RISC-V positioned as both a technical enabler and a geopolitical hedge. In the 1990s, the IBM PC helped double U.S. GDP growth for the decade. Could open-source computing based on RISC-V do the same today? That is Liang’s gamble.

After selling his previous startup, and stuck at home, Liang looked for something else to do and shifted his focus to RISC-V hardware. Out of boredom and frustration, DeepComputing was born. At first it was small projects: an RC car, a drone, some smart speakers — all running on RISC-V. “I had nothing better to do but electronics,” he admitted. But those toys were a training ground. They taught him the limits of early SoCs and showed him just how much work it would take to push RISC-V toward mainstream use. As the world turned inward, so too did Liang—redirecting his career toward the open hardware movement RISC-V now symbolized.

From the beginning, Liang leaned on pioneers like SiFive and Andes Technology. Their CPU cores — SiFive’s U74 (RV64GC_Zba, Zbb)—Zba Address Generation, Zbb Basic bit manipulation—and Andes’s 7nm QiLai family — gave DeepComputing the building blocks for something more ambitious than toys. “None of our SoC manufacturers knew where to go,” he quipped at Embedded World 2025. “They didn’t know what nanometer, what compute power, how many TOPS, how much DDR.”  His message to the audience: don’t wait for perfect specs — ship something.

Where others saw uncertainty, Liang saw opportunity. He wasn’t going to out-engineer Intel or ARM. But he could take existing IP and push it into places no one else dared — consumer laptops, where expectations were unforgiving and excuses ran out fast. Liang chuckled as he recalled the first RISC-V laptop, Roma: “I made 200 of them. Two hundred crazy guys paid me — $5,000 each. But I still lost money,” he declared at FOSDEM (Free and Open-Source Developers’ European Meeting) 2025—one of the world’s largest gatherings of open-source enthusiasts, developers, and technologists in Brussels. Roma was no commercial success, but it was proof. People wanted to touch RISC-V, not just read about it in white papers. They wanted to hold it, break it, fix it — exactly what Liang had promised. And it gave him credibility: he was no longer just another RISC-V enthusiast waving slides. He had hardware in the wild, and that mattered.

The Twitter Pitch

The real breakthrough came not at a conference, but on Twitter. Liang reached out cold to Framework. Nirav Patel founded San Francisco-based Framework Computer Inc., in 2020 to redefine the laptop industry by building computers users can upgrade, fix, and customize—empowering ownership rather than planned obsolescence. Its flagship product, the Framework Laptop 13, earned acclaim for its open ecosystem and DIY-friendly design, while the newer Laptop 16 expanded into high-performance territory with swappable input modules and GPU upgrades. Framework isn’t just selling laptops—it’s selling a movement. It was exactly the partner Liang needed to distribute his RISC-V laptop.

“I pinged them on Twitter. I begged them — you do the shell; I’ll do the motherboard. Why not? We are open,” he exclaimed on the FOSDEM 2025 stage. It was audacious — a scrappy RISC-V builder pitching a darling of the repair-friendly laptop scene. But it worked. Framework agreed. DeepComputing would focus on motherboards; Framework would provide the shells, distribution, and community. At RISC-V Taipei Day 2025, Liang turned this into a rallying cry: “Don’t throw away the case. Throw away the motherboard. You can throw x86 away, you can throw ARM away, change it to RISC-V. How good is that? No more excuses.”

Liang described his method as a ‘Lego’ approach: modular, imperfect, iterative. “I don’t care how crap the hardware is,” he declared. “Make it into a product, open it up, give it to the open-source community. Twenty million developers will help you optimize it,” Liang exhorted at Embedded World 2025. By treating laptops like Lego kits — cases here, chiplets there, swappable boards everywhere — he created a system where failure wasn’t fatal. If a design fell short, you didn’t scrap the whole thing. You just swapped in another board.

AI the Killer App

Just as the Homebrew Computer Club gave early PC hobbyists a place to swap ideas in the 1970s, new online communities are coalescing around local AI. Reddit forums like r/LocalLLaMA and r/ollama, Discord servers for llama.cpp and Ollama, and Hugging Face discussion threads have become the meeting halls where enthusiasts trade benchmarks, quantization tricks, and new use cases. These are incubators of a culture that treats local AI inference the way early hobbyists treated microcomputers: as a frontier to be explored, shared, and expanded together.

Liang sees the same cultural energy, but knows that without upstreaming, RISC-V risks falling out of sync with every new kernel release. DeepComputing, he realized, couldn’t remain a boutique shop forever. “Once we hit 10,000 units, we break even,” he declared at OSS Europe 2025. That became his new target: scale beyond early adopters, reach students, reach universities. He launched a sponsorship program—free IP, free SoCs, free boards—for schools willing to teach RISC-V. “Help me move bricks,” he implored. “Otherwise, we’re all dead.”

Scaling the Software Cliff

By 2025, DeepComputing was rolling out boards with four, eight, even 32 cores—some targeting up to 50 TOPS of AI acceleration. Hardware was moving fast. But at OSS Europe 2025, Liang admitted the real bottleneck wasn’t silicon. “It’s not a hardware issue. It’s a software issue. Even Chromium doesn’t have a RISC-V build on their Continuous Integration (CI). Without upstream, who’s going to maintain it?” he asked.

Chromium became his case in point. Beneath its familiar interface lies a labyrinth of dependencies: hundreds of handwritten assembly libraries tuned for x86 and ARM, and a build system that challenges even seasoned developers. For most users, this complexity is invisible. But for anyone bringing up a new instruction set like RISC-V, Chromium is a gatekeeper. Without native support, a RISC-V laptop can’t run a modern browser—no tabs, no JavaScript, no YouTube, no GitHub. What looks like a coding detail becomes a usability cliff.

That’s why “Chromium out of the box” isn’t a luxury—it’s a litmus test. To move beyond dev boards into mainstream PCs, RISC-V must pass through the crucible of Chromium. And that means more than just compiling the browser: it means pulling in optimized libraries, build scripts, and platform assumptions. In short: no Chromium, no desktop.

Progress has been fragile but real. Greg Kroah-Hartman, the Linux maintainer, once sent Liang a video with a simple lesson: always upstream early, even from FPGA prototypes. Liang took it to heart. “Otherwise, you wait nine months, and by the time you reach market, your kernel is already out of date,” he said.

The effort to bring Chromium and CI to RISC-V is no longer theoretical—it’s underway, though unfinished. Community and vendor teams, including Alibaba’s Xuantie group, now have Chromium builds running on RISC-V hardware, with active work to optimize the V8 engine and UI responsiveness. Developers can already cross-compile, and repositories host functional ports. But upstream integration is still marked “In Progress” in Chromium’s tracker. That leaves RISC-V vulnerable to regressions, and performance still lags behind x86 and ARM—with slow rendering and video stutter.

This is where RISE, the industry-backed non-profit, comes in. Supported by Google, Intel, NVIDIA, Qualcomm, and others, RISE’s mandate is to make sure RISC-V isn’t treated as an afterthought in critical software stacks. By funding CI integration and pushing upstream support for projects like Chromium, Linux, and LLVM, RISE is trying to turn Liang’s fragile progress into something permanent. The vision is simple: once RISC-V is tested every day in the same CI loops as x86 and ARM, it stops being a science project and starts being “just another architecture” — ready for real desktops.

Through it all, Framework has been the constant—the partner that turned Liang’s persistence into something more than a COVID-era project. “We’ve been open, we’ve been slow, but we keep going. Hope lets us work harder,” Liang said. For the RISC-V movement that has spent fifteen years chasing its breakthrough, he offers something rare: momentum.

Also Read:

SiFive Launches Second-Generation Intelligence Family of RISC-V Cores

Beyond Von Neumann: Toward a Unified Deterministic Architecture

Beyond Traditional OOO: A Time-Based, Slice-Based Approach to High-Performance RISC-V CPUs

Basilisk at Hot Chips 2025 Presented Ominous Challenge to IP/EDA Status Quo


Arm Lumex Pushes Further into Standalone GenAI on Mobile

Arm Lumex Pushes Further into Standalone GenAI on Mobile
by Bernard Murphy on 09-24-2025 at 6:00 am

chatbot on Lumex min

When I first heard about GenAI on mobile platforms – from Arm, Qualcomm and others – I confess I was skeptical. Surely there wouldn’t be enough capacity or performance to deliver more than a proof of concept? But Arm, and I’m sure others, have been working hard to demonstrate this is more than a party trick. It doesn’t hurt that foundation models have also been slimming down to a few billion parameters, so it now looks very practical to host meaningful chatbots and even agentic AI on a phone, running standalone without need for cloud access. Arm have announced their new Lumex platform in support of this trend, which may turn me into a believer. What I find striking is that GenAI is hosted on the CPU cluster with no need for GPU or NPU support.

Why should we care?

The original theory on mobile and AI was that the mobile device would package up a request, ship it to the cloud, the cloud would do the AI heavy lifting and then ship the response back to the mobile device. That theory fell apart for a litany of reasons. Acceptable performance depends on reliable and robust wireless connections, not always certain especially when traveling. Shipping data back and forth introduces potential security risks and certainly privacy concerns. The inherent latency in connections with the cloud makes real-time interaction impractical, undermining many potentially appealing use cases like chatbot apps. Some mobile apps must support quick on-device learning to refine behavior to user preferences. Finally, neither mobile app developers nor their users want to add a cloud subscription on top of their app subscription.

There may still be cases where cloud-based AI will be a useful complement to mobile, but the general mood now leans to optimizing the on-device experience as much as possible.

Arm Lumex, a new generation platform for on-device AI

All good reasons to make AI native on the phone, but how can this be effective? Arm has gone all-in to make the experience real with their newly announced Lumex platform, emphasizing the CPU cluster as the centerpiece of AI acceleration. I’ll come back to that.

Briefly, Lumex introduces new CPU cores (branded C1-Ultra, C1-Premium and C1-Pro) and a GPU core (branded G1-Ultra), with the performance advances expected of a new release. These come wrapped in a CSS philosophy of complete, 3nm-ready subsystems extending to chiplets, all supported by a software stack and ecosystem for fast time-to-market deployment.

It’s the CPU cores that particularly interest me. Arm is boasting these systems can run meaningful GenAI apps without needing to share the load with the Mali GPU or an NPU. They accomplish this with SME, their scalable matrix extension, now adding a new generation in SME2. This claim is backed up by endorsements from the Android development group, the AI partnerships group at Meta and the client engineering group at AliPay.

Benchmarking shows nearly 5X improvement in latency in speech recognition, nearly 5X encode rate for Gemma (same family as Google Gemini) and nearly 3X faster generation time for Stable Audio (from the same people who brought you image generation).

Why not add further acceleration by folding in GPU and NPUs? Geraint North (Fellow, AI and Developer Platforms at Arm) made some interesting points here. GPU and NPU cores may be faster standalone at handling some aspects of a model, but only for data types and operations within their scope. CPUs on the other hand can handle anything. Another downside to a mixed engine solution is that moving data between engines (e.g. CPU/GPU) incurs overhead no matter how well you optimize, whereas a CPU cluster is already highly optimized for minimal latency.
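Geraint North’s point can be captured with a toy latency model: a fixed per-dispatch transfer cost makes offload a loss for small kernels even when the accelerator’s raw throughput is 10x higher. All numbers below are illustrative assumptions for the sketch, not Arm benchmarks:

```python
# Toy latency model for CPU-resident execution vs. accelerator offload.
# Throughputs and transfer cost are illustrative assumptions.
def cpu_time(flops, cpu_gflops=200):
    # CPU cluster: no dispatch overhead, data already in cache/memory.
    return flops / (cpu_gflops * 1e9)

def offload_time(flops, npu_gflops=2000, transfer_s=0.002):
    # Accelerator is 10x faster on raw compute but pays a fixed
    # data-movement cost in each direction per dispatch.
    return 2 * transfer_s + flops / (npu_gflops * 1e9)

small_kernel = 1e8    # 100 MFLOP: on the order of a short generation step
large_kernel = 1e11   # 100 GFLOP: large batch where offload can pay off

assert cpu_time(small_kernel) < offload_time(small_kernel)   # CPU wins small
assert cpu_time(large_kernel) > offload_time(large_kernel)   # NPU wins large
```

Under these assumptions the crossover sits wherever compute savings exceed the fixed transfer cost; token-by-token GenAI generation tends to live on the small-kernel side, which is the scenario where a tightly coupled CPU cluster shines.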

The final nail in the mixed engine coffin is in aligning with what millions of app developers want. They start their work on CPU, naturally designing and optimizing to that target. Adding in considerations for GPU and NPU accelerator cores is pretty alien to how they think. For maximum business opportunity, they also need to support a wide range of phones, some of which may have GPU/NPU cores, and some may not. An implementation based purely on the CPU cluster keeps their plans simple, since the CPUs can handle all data types and operations. Kleidi-based libraries simplify development further by making use of SME/SME2 acceleration transparent.

Maybe a highly targeted implementation for one platform could get higher AI performance using the GPU but it wouldn’t be this scalable. Or developer friendly. Lumex offers a simpler development and deployment use-model: GenAI workloads on-device across many phone types without needing to go to the cloud. Very interesting.