Gate-All-Around (GAA) Technology for Sustainable AI
by Daniel Nenni on 02-09-2026 at 8:00 am


The transition from FinFET to Gate-All-Around (GAA) transistor technology represents a pivotal moment in the evolution of logic devices, driven by both physical scaling limits and the rapidly growing computational demands of artificial intelligence. As semiconductor technology approaches the sub-3 nm regime, traditional FinFET architectures face fundamental challenges in electrostatic control, performance scalability, and power efficiency. GAA transistors have emerged not as an optional enhancement, but as an essential foundation for sustaining future logic architectures and enabling sustainable AI computing.

FinFETs were introduced to overcome the shortcomings of planar MOSFETs by improving effective channel width and electrostatic integrity through a three-dimensional fin structure. However, aggressive scaling has exposed inherent limitations in this approach. As fin widths shrink, parasitic resistance increases and carrier mobility degrades due to quantum confinement and surface roughness scattering. Additionally, the three-sided gate configuration of FinFETs struggles to adequately suppress short-channel effects at advanced nodes. These issues collectively constrain further scaling, particularly when standard cell heights approach the ~140–160 nm range.

GAA transistors fundamentally address these challenges by fully surrounding the channel with gate material, delivering superior electrostatic control. This all-around gating enables steeper subthreshold slopes, reduced leakage current, and improved short-channel behavior, even at extremely small dimensions. As a result, GAA architectures re-enable cell height scaling to approximately 100 nm and below, restoring performance and power scaling that had plateaued during the later FinFET generations. This capability has driven industry-wide adoption of GAA as the successor to FinFET technology beyond the 3 nm node.
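
To put the electrostatics in quantitative terms, the textbook first-order expression for subthreshold swing shows why wrapping the gate around the channel helps (a generic approximation, not data for any specific process):

SS = ln(10) · (kT/q) · (1 + C_dep/C_ox) ≈ 60 mV/decade · (1 + C_dep/C_ox) at room temperature

A gate that fully surrounds the channel drives the body term C_dep/C_ox toward zero, pushing SS toward the ideal ~60 mV/decade limit; this is the steeper subthreshold slope referred to above.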

Beyond electrostatics, GAA devices introduce unprecedented design flexibility. Unlike FinFETs, which rely on discrete fin counts to adjust effective width, GAA transistors allow continuous tuning of nanosheet width. This feature enables designers to finely balance speed and power consumption within the same process node. Wider nanosheets support high-performance computing and server applications, while narrower nanosheets reduce power consumption for mobile and AI-centric workloads. This broad coverage across the speed–power spectrum makes GAA particularly well suited for heterogeneous systems and application-specific optimization.
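
A rough first-order comparison makes the flexibility point concrete (generic geometry variables, not any foundry's actual dimensions): FinFET effective width is quantized by fin count, while nanosheet width is a continuous design knob.

W_eff (FinFET) ≈ N_fin · (2·H_fin + W_fin), adjustable only in whole-fin increments
W_eff (GAA nanosheet) ≈ N_sheet · 2 · (W_sheet + T_sheet), where W_sheet can be tuned continuously per cell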

The rise of generative AI and large language models has further amplified the importance of GAA technology. AI workloads demand exponential growth in compute capability, pushing systems toward petascale and exascale performance levels. At the same time, global data center power consumption is projected to increase dramatically, raising concerns about energy sustainability. GAA transistors directly address this tension by enabling higher performance per watt and supporting increased SRAM density. Higher on-chip memory density reduces off-chip data movement, improving data locality and lowering energy costs associated with high-speed interconnects.

Crucially, GAA also serves as a structural platform for future transistor innovations. Architectures such as Forksheet FETs and three-dimensional stacked FETs (3DSFETs) build directly upon the GAA concept, preserving its electrostatic advantages while enabling further area scaling. When combined with backside power delivery networks, these advanced structures offer improved routing efficiency, reduced voltage drop, and enhanced overall performance. Together, these innovations position GAA as the cornerstone of “beyond-GAA” logic technologies.

Bottom line: The transition to Gate-All-Around technology marks a foundational shift in semiconductor design. GAA is not merely a replacement for FinFETs, but a scalable, flexible, and energy-efficient platform capable of supporting future logic architectures and sustainable AI growth. As device scaling continues to confront physical and power-related constraints, the successful implementation of GAA will determine the industry’s ability to meet the performance, efficiency, and societal demands of next-generation computing.

Also Read:

The 71st International Electron Devices Meeting (IEDM 2025)

DAC – The Chips to Systems Conference 2026

Verification Futures with Bronco AI Agents for DV Debug


VSORA Board Chair Sandra Rivera on Solutions for AI Inference and LLM Processing
by Lauro Rizzatti on 02-09-2026 at 6:00 am


Sandra Rivera, a Silicon Valley veteran, former CEO of Altera (an Intel FPGA spinout), and long-time Intel executive, recently became Chair of the Board of Directors of Paris-based VSORA. VSORA, a technology leader redefining AI inference for next-generation data centers, cloud infrastructure, and the edge, is focused on addressing high-performance, low-latency inference use cases, a market that is expected to grow to $250B by 2029.

I recently had a chance to talk with Rivera to find out more about what she sees as VSORA’s strengths and opportunities.

Rizzatti: What specific technical bottlenecks in the current AI hardware landscape convinced you that VSORA’s unique architecture provides a solution?

Rivera: One of the things we know now, in terms of the AI industry and the juncture that we’re at, is that the bottleneck for more scalable deployments is not in raw compute—it is absolutely about data movement. And if you look at the current architectures being deployed, they’ve been focused more on heavy training workloads, to which GPUs are better suited.

If you look at the problem we’re trying to solve around AI inference—particularly for large models and real-time workloads—we know that performance is not constrained by compute, but by memory bandwidth, latency, and determinism.

When we look at the problems customers are expressing to us, they want a solution that addresses high throughput, low latency, and a more deterministic set of requirements that they need to deploy in their environments. And VSORA’s architecture directly attacks this problem in terms of how data flows through the system. It minimizes off-chip memory access, maximizes effective bandwidth, and delivers much more predictable latency.

What we’re hearing from customers is that current solutions are increasingly underutilized for inference—perhaps even over-engineered for inference. They’re expensive to scale and power-hungry, whereas the VSORA architecture and product are compelling because they are built specifically to address this bottleneck: high-bandwidth, low-latency, scalable inference.

And these solutions are also quite cost-effective compared to the power-hungry GPUs that were designed to address a different part of the workflow, mainly training.

Rizzatti: Nvidia acquired Groq. Its architecture differs from that of VSORA, but at a 30,000-foot view, the problem it is addressing is similar. Do you view this acquisition as an implicit acknowledgement by Nvidia that the GPGPU is not doing the proper job for LLM processing?

Rivera: The investment by Nvidia in Groq certainly reinforces the position we’ve taken regarding the problem we’re focused on solving. It validates the thesis for the VSORA business, which is that customers are increasingly looking for solutions to address the AI inference problem.

This is about power efficiency, cost per token, and certainly much lower latency and more determinism than you’re able to get from more general-purpose computing architectures.

So yes, it’s quite helpful, because the market leader is acknowledging that it is not a one-size-fits-all architecture for addressing all the different elements of an overall AI workflow. Indeed, you need heterogeneous architectures that address different areas of that AI continuum.

For us, it’s a strong validation that the thesis we had—focusing on what is probably the biggest pain point and the largest market opportunity in the coming years—is correct: low-latency, high-performance inference that is cost-effective and power-efficient. And with our innovative architecture, we address those problems head-on.

Rizzatti: In a previous chapter, the VSORA team founded, built, and successfully exited a company called DibCom, an innovator in radio decoding that developed an advanced digital signal processor. In many ways, it laid the groundwork for VSORA. Are you confident this same team can achieve a similar level of success in AI?

Rivera: One of the biggest appeals for me in joining the team—and for prospective customers considering VSORA solutions—is the fact that this is a team that has been working together and delivering products for many years.

They have had 14 successful tape-outs in their history. They have now taped out a very complex logic device on leading process technology, including advanced packaging technology and high-bandwidth memory embedded into the overall platform.

It is not an easy thing for organizations, and certainly for silicon development teams, to come together and deliver complex products. The fact that this team, with this particular leader—the CEO—has done that 14 times before, and now has done it once again in a fast-moving field like AI, really demonstrates the cohesiveness of the team and the deep experience and expertise they have in developing complex silicon products.

I think it is one of the biggest differentiators of VSORA compared with many of the new startups that don’t necessarily have a history of working together or a demonstrated track record of success.

I consider this one of the major positives that prospective customers can feel confident about when choosing VSORA solutions for their AI infrastructure.

Rizzatti: Do you agree or disagree with the “one size doesn’t fit all” narrative that a few major players in AI processing promote? How does VSORA fit into this narrative?

Rivera: As I said earlier, the industry is very much moving toward heterogeneous architectures that address the entire AI workflow.

Even if you look at the role of the CPU as the head node and orchestrator, it handles preprocessing and data cleaning before training. GPUs and highly parallel architectures are used effectively for training-heavy workloads with massive frontier models.

Then you move into architectures designed for specific parts of the workflow and different applications. Some focus on media processing, some on networking throughput, and others on storage acceleration.

In our case, we are focused on efficient data movement between the processing engine and external memory. We have a unique architecture that addresses the memory wall problem, where compute units stall waiting for data, resulting in wasted performance and excessive power draw. Our award-winning, patented architecture enables a very power-efficient, low-latency, and low-cost solution for the AI inference problem.

The need for heterogeneous architectures to address the varying AI workload requirements is not just my belief – you’re seeing a number of strategic partnerships, collaborations, and acquisitions in the industry to support this approach.

Rizzatti: 2025 was defined by the race to scale. As we look forward to 2026 and beyond, what would you predict to be the main trends?

Rivera: I think the next phase will be defined much more around efficiency, specialization, and economics—going back to having the right architecture for the right part of the workload.

All the data and analyst research show that inference is going to dominate in terms of what enterprises are looking to address for large-scale deployments.

Problems around latency, power efficiency, and deployment costs will matter much more than headline peak benchmark numbers, which is how much of the industry has evaluated solutions to date.

I think we will also see tighter coupling between software and hardware, and architectures specifically designed for inference characteristics will shine compared to solutions repurposed from training accelerators.

In that market landscape, and in meeting customer requirements, VSORA is very well positioned—not because we seek to be different, but because we are aligned with customer pain points and where the market is heading.

We believe we will be a major player in enabling AI to scale sustainably, not just because of the technical solution, but also because of the commercial attractiveness of the offer.

Contact VSORA

Also Read:

CEO Interview with Naama BAK of Understand Tech

CEO Interview with Dr. Heinz Kaiser of Schott

CEO Interview with Moshe Tanach of NeuReality


TSMC & GCU Semiconductor Training Program: Preparing Tomorrow’s Workforce

TSMC & GCU Semiconductor Training Program: Preparing Tomorrow’s Workforce
by Daniel Nenni on 02-08-2026 at 2:00 pm


The expansion of semiconductor manufacturing in the United States, particularly with TSMC’s multi-fab campus in Phoenix, Arizona, has created a significant need for skilled technical workers. To meet this demand, TSMC has partnered with educational institutions, including Grand Canyon University (GCU), to launch innovative training pathways aimed at preparing individuals for careers in semiconductor fabrication and operations. This partnership is part of a broader ecosystem effort involving government, workforce boards, community colleges, and universities working together to develop a sustainable talent pipeline for the semiconductor industry.

Why the Program Exists

Semiconductor manufacturing is one of the most technically demanding and high-technology sectors in the global economy. Operating advanced fabrication facilities, or “fabs”, requires talent with specialized skills in automated systems, precision processes, cleanroom operations, and semiconductor science. When TSMC announced its Arizona investment, one of the key challenges highlighted was the shortage of locally available talent with the requisite technical skills. In response, the company and regional partners have collaborated on training and apprenticeship programs to build that talent ecosystem locally.

Program Structure and Partnerships

The TSMC-GCU semiconductor training program, formally known as the Manufacturing Specialist Intensive Pathway, is an industry-aligned educational pathway created to prepare participants for technical roles within semiconductor manufacturing. This initiative is part of a broader suite of workforce development efforts that also include registered apprenticeship programs, technician training with community colleges and Northern Arizona University, and other industry partnerships.

At its core, the program with Grand Canyon University focuses on equipping individuals with practical skills that map directly to Manufacturing Specialist roles at TSMC’s Phoenix fabs. The curriculum encompasses semiconductor fundamentals, wafer fabrication processes, standard operational procedures, and factory-floor workflows, all of which are foundational knowledge areas for anyone seeking to enter semiconductor manufacturing.

Program Details

Duration & Format: The program typically runs over a 15-week period, blending classroom instruction with industry-relevant learning experiences designed to mirror real semiconductor manufacturing environments.

Credentialing: Participants earn a certificate of completion from GCU, along with 16 college credit hours, and industry-recognized professional credentials from the Institute of Electrical and Electronics Engineers (IEEE), which helps validate competencies to employers.

Target Audience: The training is geared toward a wide range of learners — from high school graduates and career changers to individuals already in the workforce seeking new tech-focused opportunities.

Pathway to Employment: Successful participants gain not only educational credentials but also a competitive advantage when applying for semiconductor technician, manufacturing specialist, or related technical roles at TSMC or other semiconductor firms in Arizona.

Broader Workforce Strategy

While the GCU partnership is a key piece of the talent development puzzle, it sits within a larger regional workforce strategy. TSMC’s Registered Technician Apprenticeship program, supported by the State of Arizona, the City of Phoenix, and institutions like Estrella Mountain Community College, Northern Arizona University, and other partners, offers multi-year apprenticeship pathways in equipment, process, and facilities technician roles that combine classroom instruction with paid on-the-job training.

These programs are designed to address both entry-level and advanced technical needs. Apprentices typically work hands-on in real semiconductor environments while earning credit and experience, which can lead to stackable credentials and even associate or bachelor’s degrees when combined with college coursework.

Impact and Future Prospects

The TSMC-GCU semiconductor training program underscores the importance of public-private educational collaboration in scaling a skilled workforce fast enough to match the pace of industrial growth. By equipping participants with relevant technical knowledge and credentials recognized by both academia and industry, the program not only fills immediate labor gaps but also fosters long-term career opportunities in a high-tech sector that is becoming increasingly critical for U.S. competitiveness.

Bottom line: This initiative helps bridge the transition from education to employment in a field where the demand for skilled workers is projected to grow as semiconductor manufacturing continues to expand across the United States.

GCU and TSMC’s MSI Pathway Webinar

Also Read:

The Chronicle of TSMC CoWoS

TSMC’s CoWoS® Sustainability Drive: Turning Waste into Wealth

TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion


CEO Interview with Dr. Raj Gautam Dutta of Silicon Assurance
by Daniel Nenni on 02-08-2026 at 12:00 pm

Dr. Raj Gautam Dutta is the Co-Founder and Chief Executive Officer of Silicon Assurance, where he defines the company’s strategic direction and leads its technology and product vision. He is responsible for driving the development of differentiated hardware security solutions, executing growth and partnership strategies, and positioning the company within the global semiconductor and security ecosystem. With over eight years of experience in hardware security innovation, Dr. Dutta has a proven record of translating advanced research into deployable, market-ready technologies. His background in technology transfer and strategic technology analysis enables Silicon Assurance to deliver solutions that combine deep technical rigor with commercial and mission-critical relevance.

Tell us about your company

Silicon Assurance is a Security EDA company dedicated to strengthening semiconductor supply chain trust by enabling organizations to establish verifiable confidence in the security of the silicon they design, integrate, procure, and deploy. Our flagship platform, Analyz-N™, is a gate-level security assurance platform that automates the identification of security-relevant assets, analyzes post-synthesis security risk, and generates audit-ready assurance reports to prove customers’ silicon is verifiably secure, not merely assumed secure.

What problems are you solving?

Modern microelectronics underpin mission-critical cyber and national security systems, yet hardware security assurance has not kept pace with the sophistication of emerging threats. The complexity of AI accelerators, the rise of multi-chiplet architectures, and pervasive third-party IP reuse have significantly expanded the hardware attack surface and weakened traditional trust boundaries. Current practices rely heavily on RTL-focused point tools and static checklists, which fail to capture evolving hardware threats and the security implications of EDA design and verification flows that were not built with adversarial behavior in mind. As a result, vulnerabilities—such as leakage paths, privilege escalation, hardware Trojans, or countermeasures that fail to propagate through synthesis—can remain undetected until post-deployment.

Silicon Assurance is developing Analyz-N™, an end-to-end automated gate-level security assurance EDA platform designed to prove the security of microelectronics. Our flagship platform’s capabilities include automatically identifying security-relevant assets, correlating them with common security weaknesses, analyzing post-synthesis security risk, identifying attack stimuli, locating the source of weakness, and generating security testbenches. This allows customers to detect security degradation before silicon is built and to prove, rather than assume, that their silicon is secure.

What application areas are your strongest?

We are strongest in high-assurance and mission-critical applications, including defense, aerospace, critical infrastructure, and automotive. Our platform is particularly valuable for designs where a single security failure can lead to system compromise, safety risks, or national security impacts.

What keeps your customers up at night?

Customers worry about unknown security vulnerabilities in their designs, which can lead to system recalls and the remanufacturing of chips. They are concerned about auditability, compliance, and proving their chips are secure for their customers.

What does the competitive landscape look like and how do you differentiate?

Existing solutions fall into three categories:
● Traditional EDA tools: Focused on functional correctness and performance, not adversarial security behavior.
● Manual security reviews: Expert-driven but slow, subjective, limited in coverage, and unscalable.
● Point security tools: Designed for RTL analysis and narrow in scope, i.e., they focus on a subset of the collateral generated by Analyz-N.

Silicon Assurance differentiates by being security-first, implementation-aware, scalable, exhaustive, and autonomous, positioning it to define hardware security verification as a core requirement for silicon designs rather than an ad-hoc review step.

What new features or technology are you working on?

We are combining GenAI-driven formal analysis, rule-based analysis, fuzzing-style exploration, and AI-driven reasoning into the Analyz-N platform to make it accessible to entry-level security verification engineers. Furthermore, such technologies will enable Analyz-N to detect various threat categories. The platform’s results are repeatable, verifiable, accurate, and scalable to modern hardware designs, enabling forward integration into commercial design flows.

How do customers normally engage with your company?

Customers typically engage through pilot evaluations on real IPs or SoCs, where we demonstrate concrete, design-specific findings. From there, we move into enterprise deployments aligned with existing verification flows and simulators. Our engagements are highly collaborative. We work closely with security, verification, and design teams to integrate Analyz-N into their design and verification lifecycle.

LEARN MORE

Also Read:

CEO Interview with Naama BAK of Understand Tech

CEO Interview with Dr. Heinz Kaiser of Schott

CEO Interview with Moshe Tanach of NeuReality


Podcast EP330: An Overview of DVCon U.S. 2026 with Xiaolin Chen
by Daniel Nenni on 02-06-2026 at 10:00 am

Daniel is joined by Xiaolin Chen, Senior Director of Technical Product Management for Formal Solutions at Synopsys. She has over 20 years of experience applying formal technology in verification and partnering with customers to identify opportunities where formal methods are best suited to solve complex verification challenges. She is currently serving as the General Chair for DVCon U.S. 2026 and has served on the steering committee and technical program committees since 2019. She has also authored more than 20 conference papers and holds a patent.

Dan explores the upcoming DVCon event with Xiaolin. The conference will be held at the Hyatt Regency Santa Clara from March 2-5, 2026. Xiaolin explains that the Hyatt Regency is a new venue for the event; its larger size will allow more events and larger sessions. The conference has grown quite a bit, and this year there are four Platinum sponsors. Xiaolin explains that this allows a larger number of sponsored events, creating a very full agenda.

Dan reviews the keynotes, tutorials, workshops, paper sessions, interactive poster sessions, and panels that are planned. There is also a hackathon. Xiaolin explains there are several high-profile keynote speakers. A main focus for many conference events is the application of AI. Dan and Xiaolin explore the implications of AI for design verification in some detail.

You can register for DVCon USA here.


The Risk of Not Optimizing Clock Power
by Mike Gianfagna on 02-06-2026 at 6:00 am


Clock power is rarely the issue teams expect to limit advanced-node designs. Yet in many chips today, over-driven clock networks quietly consume disproportionate power, reduce thermal headroom, and can constrain achievable frequency, all while passing traditional sign-off checks and often remaining locked in through tapeout.

Because clocks toggle continuously and span the entire chip, inefficiencies in the clock network compound relentlessly. Once locked into silicon, these costs are paid every cycle, in every product, for the life of the design. Let’s take a closer look at the risk of not optimizing clock power.

Power as a Silent Constraint

Power is a universal constraint for advanced chip designs, increasingly determining whether performance targets, thermal limits, and product differentiation can actually be achieved. While designers spend significant effort optimizing functional logic, the clock network often escapes the same level of scrutiny: once timing closes it is assumed to be good enough, and its power behavior is rarely examined in a detailed, clock-network-specific way.

Why Clock Power Is Getting Worse

The massive processing required by AI workloads has turbo-charged the problem. There are many dials to turn to optimize power. It turns out the energy consumed by the clock network in most advanced designs is a substantial contributor to the power problem – often without being explicitly identified as such during sign-off, even as logic efficiency improves and overall power budgets tighten. Clock power can quietly erode available budget even when all timing requirements appear to be met.

Clock Networks: A Disproportionate Power Consumer

You can do your own research on the issue. I found some very useful data at numberanalytics.com and embedded.com. A few key statistics are worth repeating:

  • In modern chips, clock networks can account for over 50% of the total dynamic power consumption.

Several factors contribute to the power consumption of clock networks:

  • Capacitance: The capacitance of the clock network affects how much power is consumed during signal transitions.
  • Switching Activity: The frequency at which the clock toggles directly impacts power usage. Higher switching rates lead to increased power consumption.
  • Wire Length: Longer wires add capacitance and resistance, both of which lead to higher power dissipation.

Capacitance, switching activity (or clock speed), and wire length are all familiar problems for advanced chip design. They all drive power consumption in the wrong direction.

It’s also important to note that unlike logic nets, which may toggle infrequently, the clock net has a 100 percent activity factor. Every inefficiency in gate sizing, topology, or loading assumption is therefore paid continuously. Small amounts of excess drive strength can translate into significant power loss when multiplied across deep clock networks and sustained over billions of cycles. This makes clock power uniquely unforgiving: inefficiencies do not average out over time.
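
A back-of-the-envelope calculation using the standard switched-capacitance model, P = alpha * C * V^2 * f, illustrates the point. The values below are purely hypothetical and chosen only to show the effect of the activity factor, not measurements from any design:

# Illustrative comparison of clock-net vs. logic-net dynamic power using the
# switched-capacitance model P = alpha * C * Vdd^2 * f. All numbers are
# hypothetical and exist only to show the impact of a 100 percent activity factor.

def dynamic_power_watts(alpha, cap_farads, vdd_volts, freq_hz):
    """Switched-capacitance dynamic power estimate."""
    return alpha * cap_farads * vdd_volts**2 * freq_hz

VDD = 0.7        # supply voltage in volts (illustrative)
FREQ = 2.0e9     # clock frequency in Hz (illustrative)
CAP = 50e-12     # total switched capacitance per net class in farads (illustrative)

clock_power = dynamic_power_watts(alpha=1.0, cap_farads=CAP, vdd_volts=VDD, freq_hz=FREQ)
logic_power = dynamic_power_watts(alpha=0.1, cap_farads=CAP, vdd_volts=VDD, freq_hz=FREQ)

print(f"Clock net: {clock_power*1e3:.1f} mW, logic net: {logic_power*1e3:.1f} mW "
      f"({clock_power/logic_power:.0f}x for the same capacitance)")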

Where Traditional Clock Power Analysis Falls Short

Let’s look at some of the basic steps of how the clock tree is implemented:

  1. RTL design of the clock network, considering items such as net connectivity, number of nodes, required drive strength, and estimated wiring load.
  2. Synthesize the clock tree.
  3. Perform post-layout extraction and timing analysis to verify the clock network meets timing specifications. At this stage, the clock network is typically judged complete based on timing closure alone. Once this judgment is made, opportunities to revisit clock power late in the flow are often avoided due to perceived risk.

This is clearly a simplified view of the steps involved, but what happens in step 3 presents a significant opportunity. Tools like static timing analysis can verify overall performance of the clock network to ensure timing is met. But at what cost?

The opportunity here lies in the clock gates that are inserted during synthesis. The choices made by the synthesis tool are influenced by the drive requirements of the elements in each clock tree and an estimate of the loading effects of the wiring. The amount of wiring has become quite large for advanced designs, so small errors in estimates can become big discrepancies in the final layout.

In any clock network, there will be drivers that are too large for the required load, resulting in wasted power. There will also be undersized drivers that will struggle to keep up, also wasting power, though over-sizing is far more common in practice.

A useful analogy here is internal combustion engines in automobiles. If the engine is too small for the car’s weight, it will struggle and waste gasoline. If it’s too powerful for the car’s weight, gas will also be wasted. There is an optimal engine size for a given car configuration from a fuel efficiency point of view. The same is true for clock network drivers.

This combination of scale, electrical complexity, and late-stage risk has made clock power one of the hardest problems to address in practice. The post-layout netlist of the clock network contains all the actual wiring load and clock drivers chosen by the synthesis tool. Some of those drivers will be too small for the required load and some will be too large, based on the difference between the estimate and actual wire loads.
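
As a toy sketch of the bookkeeping this implies (hypothetical data structures and thresholds, not ClockEdge's actual tool flow), one can compare the load each clock driver was sized for against the post-layout extracted load and flag the mismatches:

# Hypothetical sketch: flag clock drivers whose synthesis-time load estimate
# diverges from the post-layout extracted load. The class, field names, and
# 20% tolerance are illustrative only; a real flow would read this data from
# the netlist and extracted parasitics (e.g., SPEF), not hard-coded values.

from dataclasses import dataclass

@dataclass
class ClockDriver:
    name: str
    estimated_load_ff: float   # load (fF) assumed when the driver was sized
    extracted_load_ff: float   # load (fF) seen in post-layout parasitics

def classify(driver: ClockDriver, tolerance: float = 0.2) -> str:
    """Label a driver as over-driven, under-driven, or ok based on load mismatch."""
    ratio = driver.extracted_load_ff / driver.estimated_load_ff
    if ratio < 1.0 - tolerance:
        return "over-driven"    # sized for more load than is present: wasted power
    if ratio > 1.0 + tolerance:
        return "under-driven"   # sized for less load than is present: slew and power risk
    return "ok"

drivers = [
    ClockDriver("ckbuf_l3_017", estimated_load_ff=40.0, extracted_load_ff=22.0),
    ClockDriver("ckbuf_l3_018", estimated_load_ff=40.0, extracted_load_ff=61.0),
    ClockDriver("ckbuf_l3_019", estimated_load_ff=40.0, extracted_load_ff=38.5),
]

for d in drivers:
    print(f"{d.name}: {classify(d)}")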

The most reliable way to find these issues is to perform a SPICE-level analysis on the clock network. Historically, however, this level of electrically accurate analysis has been impractical at full clock-network scale for real designs. Until now.

Turning Clock Power Risk into an Addressable Problem

ClockEdge has developed technology that makes electrically accurate clock power analysis practical at full scale – an area that has traditionally been out of reach. Instead of relying on inferred models or averaged assumptions, this approach evaluates clock power behavior directly across complete clock networks under realistic post-layout conditions.

Crucially, this visibility can be applied early in the clock design process, beginning at clock tree synthesis (CTS). By identifying over-driven paths and unnecessary margin at CTS, teams can make informed sizing and topology decisions before inefficiencies are propagated and locked in. This early intervention reduces downstream power waste and minimizes the need for disruptive late-stage changes.

As the design matures, the same electrically grounded analysis continues to provide value, allowing teams to validate clock power assumptions and refine optimization decisions with confidence. By anchoring analysis in actual electrical behavior, clock power optimization becomes a controlled, data-driven exercise rather than a risky late-stage guess – while preserving timing integrity throughout the flow.

Clock power is no longer a secondary concern; it is a growing source of hidden risk in advanced-node designs. When conservative assumptions are embedded early in CTS and left unexamined, unnecessary power consumption can become locked in, quietly constraining performance, thermal headroom, and predictability. The risk of not optimizing clock power is that its impact often goes unnoticed until it’s too late to change.

To Learn More

If clock power is expected to challenge your next design, ClockEdge provides a practical way to evaluate and optimize clock power with its vPower solution. To see how this technology applies to your clock network before inefficiencies are locked in, request a demo with the ClockEdge team. This is how you can avoid the risk of not optimizing clock power.

Also Read:

Taming Advanced Node Clock Network Challenges: Jitter

Taming Advanced Node Clock Network Challenges: Duty Cycle

How vHelm Delivers an Optimized Clock Network


Quadric’s Recent Momentum & Funding Success
by Daniel Nenni on 02-05-2026 at 10:00 am


Quadric®, Inc., headquartered in Burlingame, California, is accelerating its position as a leading provider of programmable AI inference processor intellectual property (IP) and development tools for on-device AI workloads. The company announced an oversubscribed $30 million Series C funding round, bringing total capital raised to $72 million. This round was led by the ACCELERATE Fund (managed by BEENEXT Capital Management), with participation from returning investors Uncork Capital and Pear VC, as well as new investors Volta, Gentree, Wanxiang America, Pivotal, and Silicon Catalyst Ventures.

According to Quadric and reporting by industry outlets, this Series C comes as product revenues more than tripled in 2025 compared to 2024, reflecting strong adoption of the company’s technology across multiple application areas including edge large language models (LLMs), automotive AI processing, and enterprise vision workloads. Executives point to “accelerating design-win momentum” as proof of increasing market traction for Quadric’s Chimera™ general-purpose neural processor architecture.

Quadric positions its Chimera™ IP as a fully programmable alternative to fixed-function NPUs that typically dominate the edge AI landscape. Chimera supports both traditional DSP and AI inference tasks on a unified architecture, enabling semiconductor designers to build chips capable of running vision models and on-device LLMs (including models up to ~30B parameters). The architecture is scalable from 1 trillion operations per second (TOPS) up to 864 TOPS, with automotive-grade, safety-enhanced options for ASIL compliance.

Management has emphasized that this software-centric, programmable approach helps shield customers from the risk of obsolescence as model architectures evolve—an advantage over rigid accelerators in a fast-changing landscape.

Strategic License Wins & Ecosystem Expansion

In conjunction with the funding announcement, Quadric also disclosed new licensing wins that underscore the diversity of its target markets. One new licensee is a leading Asia-based edge-server LLM silicon provider, reflecting demand for on-device language model inference at scale.

A second major engagement comes from TIER IV, Inc. of Japan, a leader in autonomous driving software. TIER IV has licensed the Chimera AI processor software development kit (SDK) to evaluate and optimize future iterations of Autoware®, the open-source autonomous vehicle software stack the company pioneers.

The TIER IV engagement illustrates Quadric’s push into automotive and autonomous system markets, where efficient, programmable AI compute is increasingly a differentiator. As AI workloads proliferate in next-generation vehicles—handling perception, planning, and control—Quadric’s SDK provides a pathway for developers to optimize inference for the specific needs of autonomous platforms.

Leadership & Organizational Developments

Quadric has been strengthening its leadership and engineering bench. In December 2025, the company announced the appointment of Ravi Chakaravarthy as Vice President of Software Engineering, a role focused on driving the development of the embedded AI software stack that underpins the Chimera IP and toolchain.

Around the same time, Quadric also added Joachim Kunkel as an independent member of its Board of Directors. Kunkel brings deep industry experience, particularly from his 18-plus years leading Synopsys’ semiconductor IP division. His participation signals Quadric’s intent to scale strategically and benefit from seasoned guidance as it expands its footprint in the AI processor IP ecosystem.

Market Context & Competitive Stance

Quadric’s progress comes against a backdrop of consolidation in the neural processing unit (NPU) IP market. According to industry reporting, the number of startups offering NPU IP has declined as competition intensifies and differentiation becomes more difficult. Quadric’s blend of programmable hardware and comprehensive software tooling is cited by some observers as a key competitive advantage that helps it stand out from both fixed-function accelerator vendors and other IP licensors.

Bottom Line: With this recent funding and expanding customer engagements, Quadric appears positioned to continue its strategy of licensing the Chimera IP and SDK to chipset designers tackling the diverse demands of edge AI, in areas ranging from smart devices and industrial systems to advanced driver assistance and autonomous vehicles.

Quadric also has a new website which, as a website connoisseur, I find quite clever. Click here and check it out.

Also Read:

Quadric: Revolutionizing Edge AI

Legacy IP Providers Struggle to Solve the NPU Dilemma

Recent AI Advances Underline Need to Futureproof Automotive AI


Beyond Transformers. Physics-Centric Machine Learning for Analog
by Bernard Murphy on 02-05-2026 at 6:00 am


Physical AI is an emerging hot trend, popularly associated with robotics though its scope extends well beyond compute systems interacting with the physical world. For any domain in which analysis rests on differential equations (foundational in physics), the transformer-based systems behind LLMs are not the best fit for machine learning. Analog design analysis is a good example. SPICE simulators determine circuit behavior by solving a set of differential equations governed by basic current and voltage laws applied to circuit components together with boundary conditions. Effective learning here requires different approaches.
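
As a minimal example of what one of these equations looks like, Kirchhoff's current law at the output node of a simple RC stage gives a first-order differential equation in the node voltage:

C · dv_out/dt = (v_in(t) − v_out) / R, with v_out(0) set by the boundary condition.

A real circuit yields a coupled, generally nonlinear system of such equations, one per node, which is what a SPICE engine integrates numerically.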

Machine learning methods in analog

The obvious starting point is to run a bunch of SPICE sims based on some kind of statistical sampling (across parameter and boundary condition variances, say) as input to ML training. Monte Carlo analysis is the logical endpoint of this approach. For learning purposes, such methods are effective but significantly time- and resource-consuming. Carefully reducing samples will reduce runtimes, but what will you miss in-between those samples?
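
A minimal sketch of this sampling-based flow, with a toy stand-in for the simulator (the parameter names, ranges, and the run_spice function are illustrative placeholders, not a real SPICE interface):

import random

# Hypothetical sketch of sampling-based training-data generation. run_spice()
# is a toy closed-form stand-in for an external simulator call; in a real flow
# each sample means launching SPICE, which is exactly the cost described above.

def run_spice(params):
    """Toy stand-in for an expensive circuit simulation (illustrative only)."""
    return 20.0 * params["vdd"] - 0.01 * params["temp"] - 0.001 * params["rload"]

def sample_training_set(n_samples, seed=0):
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_samples):
        params = {
            "vdd":   rng.gauss(0.75, 0.02),   # supply variation in volts
            "temp":  rng.uniform(-40, 125),   # junction temperature in C
            "rload": rng.gauss(1e3, 50),      # load resistance in ohms
        }
        dataset.append((params, run_spice(params)))
    return dataset

training_data = sample_training_set(1000)
print(f"{len(training_data)} (parameters, result) pairs collected for ML training")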

A different approach effectively builds the Newton (-Raphson) iteration method behind SPICE into the training process, enabling automated refinement in gradient-based optimization. (Concerns about correlation with signoff SPICE can be addressed through cross-checks when needed.)
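
For readers unfamiliar with the underlying iteration, here is a minimal textbook Newton-Raphson solve for a single nonlinear node, a diode fed through a series resistor. This is the classical method SPICE builds on, not the accelerated algorithm discussed below, and the component values are illustrative:

import math

# Minimal Newton-Raphson solve for one nonlinear circuit node: a voltage
# source VS drives a series resistor R into a diode to ground. KCL at the
# diode node gives the residual f(v) = (VS - v)/R - Is*(exp(v/Vt) - 1) = 0.
VS, R = 1.0, 1e3           # source voltage (V), series resistance (ohms)
IS, VT = 1e-14, 0.02585    # diode saturation current (A), thermal voltage (V)

def residual(v):
    return (VS - v) / R - IS * (math.exp(v / VT) - 1.0)

def residual_derivative(v):
    return -1.0 / R - (IS / VT) * math.exp(v / VT)

v = 0.6                    # initial guess near a typical diode drop
for _ in range(50):
    step = residual(v) / residual_derivative(v)
    v -= step
    if abs(step) < 1e-12:
        break

print(f"Node voltage ~ {v:.4f} V, diode current ~ {(VS - v) / R * 1e3:.3f} mA")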

Newton's method is the gold standard in circuit solving, but it is iterative, which could dramatically slow learning without acceleration. Parallelism provided the big breakthrough that launched LLMs to fame. Could the same idea work here? The inherent non-linear nature of analog circuit equations has proven to be a major challenge in effectively using parallelism for real circuits. However, a recent paper has introduced an algorithm that allows machine learning with embedded circuit simulation to more fully exploit parallelism on hardware platforms such as GPUs.

Beyond sampling to deep physics modeling

At first glance, simply embedding circuit solving inside a machine learning (ML) algorithm seems not so different from running sims externally then learning from those results. Such a method would still be statistical sampling, packaged in a different way. Mach42 have a different approach in their Discovery Platform which they claim leads to higher accuracy, stability and predictive power in results. I see good reasons to believe their claim.

Brett Larder (co-founder and CTO at Mach42) gave me two hints. First, they aim for continuous time modeling during ML training, whereas most digital methods discretize time. They have observed that conventional methods predict results which are noisy, irregular, even inaccurate, whereas in their algorithm each training point can contain measurements at a different set of time values, rather than requiring that each index on an input/output always correspond to the same time. Training data distributed across timepoints rather than bunched into discretized time steps; I can see how that could lead to more accurate learning.

Second, they aim to learn parameters of the differential equations describing the circuit, rather than learning coarse statistical sampling of input to output results. This is very interesting. Rather than driving a model for statistical interpolation, it learns refined differential equations. For me this is intuitively more likely to be robust to changes under different physical conditions and truly feels like deep physics modeling.
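
To make the distinction concrete, here is a toy version of the idea (a pedagogical sketch, not Mach42's algorithm): rather than fitting a black-box map from sampled inputs to outputs, fit the parameter of the governing differential equation itself, in this case the time constant tau in dv/dt = (v_in − v)/tau, by minimizing the mismatch against an observed waveform.

import numpy as np
from scipy.optimize import minimize_scalar

# Toy illustration of learning a differential-equation parameter rather than a
# black-box input-to-output map. We "observe" a first-order response with a
# hidden time constant, then recover tau by minimizing the waveform mismatch.

def simulate(tau_ns, v_in, dt_ns, n_steps):
    """Forward-Euler integration of dv/dt = (v_in - v) / tau, starting from 0 V."""
    v = np.zeros(n_steps)
    for k in range(1, n_steps):
        v[k] = v[k - 1] + dt_ns * (v_in - v[k - 1]) / tau_ns
    return v

dt_ns, n_steps, v_in = 1.0, 200, 1.0
true_tau_ns = 25.0
observed = simulate(true_tau_ns, v_in, dt_ns, n_steps)   # stands in for measured or SPICE data

def waveform_mismatch(tau_ns):
    return np.mean((simulate(tau_ns, v_in, dt_ns, n_steps) - observed) ** 2)

fit = minimize_scalar(waveform_mismatch, bounds=(1.0, 100.0), method="bounded")
print(f"Recovered tau ~ {fit.x:.2f} ns (true value {true_tau_ns:.0f} ns)")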

The cherry on the cake is that this algorithm can be parallelized to run on a GPU, making training very time-efficient, from SerDes at 2 hours to a complex automotive PMIC at 20 hours, all delivering accuracy at 90% or better. The models produced by this training can run inside SPICE or Verilog-A sims, orders of magnitude faster than classical AMS simulations.

Payoff

Active power management in devices from smartphones to servers depends on LDOs providing regulated power to serve multiple voltage levels. A challenge in designing these circuits is that the transfer function from input to output can shift as current draw and other operating conditions change. Since the goal of an LDO is to provide a stable output, compensating for such shifts is a major concern in LDO design.

The standard approach to characterizing an LDO is to run a bunch of sweeps over circuit parameters to determine how well compensation will manage across a range of load and operating conditions. These simulations consume significant time and resources and can miss potential issues in-between sweep steps. In contrast, the learned dynamic model created by the Discovery Platform provides much more accurate estimation across the range, so that anomalies are much more likely to be detected. Moreover, changes in behavior as parameters are varied can be viewed in real-time thanks to these fast dynamic models.

Very nice – moving beyond transformer-based learning to real physics-based learning. You can read more about this new modelling approach in Mach42’s latest blog HERE.

Also Read:

2026 Outlook with Paul Neil of Mach42

Video EP12: How Mach42 is Changing Analog Verification with Antun Domic

Video EP10: An Overview of Mach42’s AI Platform with Brett Larder


2026 Outlook with Abhijeet Chakraborty VP, R&D Engineering at Synopsys
by Daniel Nenni on 02-04-2026 at 10:00 am


Tell us a little bit about yourself and your company.

My name’s Abhijeet Chakraborty and I’m Vice President of Engineering at Synopsys. I led the development of Synopsys Design Compiler-NXT, the industry’s leading synthesis product, and now oversee the company’s multi-die and 3DIC product portfolio. Throughout my career, I’ve held a number of R&D roles in the semiconductor industry including at influential startups like Magma Design Automation and Monterey Design.

I’ve had a front‑row seat to how the semiconductor landscape is evolving—and how Synopsys is evolving with it. As products increasingly become intelligent, software‑defined systems – be it a robot, car, data center, or something in between – we are empowering our customers to “re-engineer their engineering” so they can meet unprecedented complexity in the AI era.

What was the most exciting high point of 2025 for your company?

Without question, the defining moment of 2025 for Synopsys was completing our acquisition of Ansys. This was not just a business milestone — it reshaped Synopsys into the leader in engineering solutions from silicon to systems by combining the leaders in electronic design automation (EDA), design IP, and simulation and analysis.

And our combination comes at a pivotal time for the industry. Engineering today’s intelligent systems is not only a silicon and software challenge, but a physics challenge too. As a combined company, we can bring together both digital and physical design, so engineering teams can innovate faster, better, and with the whole picture in mind – from silicon to systems.

What do you think the biggest growth area for 2026 will be, and why? How is your company’s work addressing this growth?

In 2026, I have my eye on the intersection of AI and multi‑die design — not only on multi‑die as an enabler for the AI era, but also on AI as an enabler for scalable multi‑die engineering.

Multi-die designs can deliver far greater performance and flexibility than monolithic chips, supporting soaring compute demands and AI-driven workloads. The challenge now is to manage the architectural and multiphysics complexities that heterogeneous integration brings, which are far beyond what traditional design flows can handle.

In summary, it is really using AI to develop AI – helping teams explore architectures faster, optimize interconnects, and account for electrical, thermal, and mechanical effects early, accurately, and at scale.

Will you participate in conferences in 2026? Same or more as 2025?

Absolutely — 2026 will be a very active year. In a few weeks, I’ll be giving the opening keynote at the Chiplet Summit in Santa Clara on Wednesday, February 18, where I’ll share my perspective on how AI is transforming multi‑die design through advanced automation. This will be my third year at the summit and I am looking forward to speaking about this consequential topic. I especially enjoy the exhibits and panels as an opportunity to discuss challenges and solutions with others in the ecosystem.

Beyond Chiplet Summit, Synopsys will be present across industry events, especially as we bring the Synopsys and Ansys communities together. Our inaugural Synopsys Converge conference in March 2026 will be a major gathering point — bringing Synopsys User Group “SNUG” Silicon Valley, Simulation World, and Executive Forum under the same roof for one flagship event.

Learn more: Synopsys at Chiplet Summit 2026

Register: Synopsys Converge: Re-engineering the Future

Also Read:

Synopsys and AMD Honored for Generative and Agentic AI Vision, Leadership, and Impact

Synopsys’ Secure Storage Solution for OTP IP

Curbing Soaring Power Demand Through Foundation IP

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation


The Launch of RISC-V Now! A New Chapter in Open Computing
by Daniel Nenni on 02-04-2026 at 8:00 am


On February 3, 2026, Andes Technology officially announced the launch of RISC-V Now!, a new global conference series designed around the next phase of RISC-V adoption: real-world deployment and commercial scaling. This initiative marks a shift from exploratory and research-focused events toward practical, production-oriented exchanges that help engineers, architects, and decision-makers navigate the realities of building shipping systems on a RISC-V foundation.

At its core, RISC-V Now! responds to an industry that is rapidly moving past foundational experimentation into crafting competitive products using an open instruction set architecture (ISA). Unlike more conceptual workshops or broad ecosystem summits, this series emphasizes deployment challenges, system-level tradeoffs, and lessons learned from production-scale platforms. The first marquee gathering will take place in Silicon Valley (San Jose) on April 20–21, 2026, at the DoubleTree by Hilton, with additional regional events scheduled in Hsinchu (April 15), Shanghai (May 12), and Beijing (May 14).

Context: Why RISC-V Matters

To appreciate the significance of RISC-V Now!, it helps to understand RISC-V’s broader trajectory. RISC-V is an open-source instruction set architecture that emerged from academic research at UC Berkeley and has since gained global momentum. Its open nature means companies can implement it without costly licensing fees or restrictive agreements, a contrast to proprietary ISAs like ARM or x86. This has already led to explosive adoption in embedded devices and microcontrollers, where billions of RISC-V cores now ship annually.

Over the past few years, support for RISC-V has expanded in both software infrastructure and hardware capabilities. For example, major Linux distributions such as Debian have begun officially supporting 64-bit RISC-V, and advanced kernel patches (like ZALASR) are moving toward mainline inclusion — signs of maturation in the open ecosystem. At the same time, companies such as SiFive and StarFive are pushing higher-performance RISC-V designs aimed at AI, IoT, and data center usage.

RISC-V Now! emerges against this backdrop at a moment when RISC-V is no longer just a promising idea but a practical choice for commercial products. The conference series directly tackles questions that teams face when moving from prototypes to shipped systems: how to balance performance, power efficiency, cost, software integration, validation, and ecosystem tooling.

What the Conference Focuses On

According to Andes Technology, the curriculum of RISC-V Now! is tailored for practitioner-level discussions rather than pure theory. Key themes include:

System-Level Tradeoffs: Choosing CPU and SoC strategies that meet specific use-case constraints (e.g., AI acceleration pools vs. energy efficiency corridors).

Software Enablement Challenges: Real issues encountered when porting, optimizing, and maintaining software stacks at scale on RISC-V hardware.

Lessons from Production Systems: Case studies and insights from companies that have already brought RISC-V products to market.

By foregrounding deployment realities rather than just technological promise, RISC-V Now! aims to bridge the gap between enthusiasm for open ISAs and the concrete needs of engineering teams tasked with delivering competitive products.

Industry Implications

The launch of this series further indicates that RISC-V has transitioned from novel architecture to mainstream contender. Once primarily associated with niche and embedded applications, RISC-V is now being positioned as a foundation for general computing, AI, automotive systems, and data centers. Its open nature not only reduces barriers to entry but also allows customization that proprietary ISAs can’t match.

Bottom line: This trend is reflected across the tech ecosystem: from expanded Linux support to growing commercial IP offerings and major ecosystem events around RISC-V architectures. RISC-V’s momentum is unmistakable, and initiatives like RISC-V Now! are helping solidify its place in the production pipelines of tomorrow’s computing platforms.

REGISTER NOW

Also Read:

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

RISC-V: Powering the Era of Intelligent General Computing

Navigating SoC Tradeoffs from IP to Ecosystem