
CEO Interview with Dr. Raj Gautam Dutta of Silicon Assurance

CEO Interview with Dr. Raj Gautam Dutta of Silicon Assurance
by Daniel Nenni on 02-08-2026 at 12:00 pm

Dr. Raj Gautam Dutta


Dr. Raj Gautam Dutta is the Co-Founder and Chief Executive Officer of Silicon Assurance, where he defines the company’s strategic direction and leads its technology and product vision. He is responsible for driving the development of differentiated hardware security solutions, executing growth and partnership strategies, and positioning the company within the global semiconductor and security ecosystem. With over eight years of experience in hardware security innovation, Dr. Dutta has a proven record of translating advanced research into deployable, market-ready technologies. His background in technology transfer and strategic technology analysis enables Silicon Assurance to deliver solutions that combine deep technical rigor with commercial and mission-critical relevance.

Tell us about your company

Silicon Assurance is a Security EDA company dedicated to strengthening semiconductor supply chain trust by enabling organizations to establish verifiable confidence in the security of the silicon they design, integrate, procure, and deploy. Our flagship platform, Analyz-N™, is a gate-level security assurance platform that automates the identification of security-relevant assets, analyzes post-synthesis security risk, and generates audit-ready assurance reports to prove customers’ silicon is verifiably secure, not merely assumed secure.

What problems are you solving?

Modern microelectronics underpin mission-critical cyber and national security systems, yet hardware security assurance has not kept pace with the sophistication of emerging threats. The complexity of AI accelerators, the rise of multi-chiplet architectures, and pervasive third-party IP reuse have significantly expanded the hardware attack surface and weakened traditional trust boundaries. Current practices rely heavily on RTL-focused point tools and static checklists, which fail to capture evolving hardware threats and the security implications of EDA design and verification flows that were not built with adversarial behavior in mind. As a result, vulnerabilities—such as leakage paths, privilege escalation, hardware Trojans, or countermeasures that fail to propagate through synthesis—can remain undetected until post-deployment.

Silicon Assurance is developing Analyz-N™, an end-to-end automated gate-level security assurance EDA platform designed to prove the security of microelectronics. Our flagship platform’s capabilities include automatically identifying security-relevant assets, correlating them with common security weaknesses, analyzing post-synthesis security risk, identifying attack stimuli, locating the source of weakness, and generating security testbenches. This allows customers to detect security degradation before silicon is built and to prove, rather than assume, that their silicon is secure.
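
To give a flavor of the kind of structural check such a gate-level platform automates, here is a minimal, hypothetical sketch: tracing whether a security asset (a key register in this invented netlist) can structurally reach an observable point such as a debug output. It illustrates the general technique only, not Analyz-N itself; the netlist, net names, and the notion of "observable" are made up for illustration.

```python
# Minimal illustration (not Analyz-N): flag gate-level paths from a
# security asset (e.g., a key register) to an observable point such as
# a debug or primary output. Netlist and names are hypothetical.
from collections import defaultdict, deque

# Toy post-synthesis netlist: gate -> (output net, input nets)
netlist = {
    "U1_and": ("n_key_masked", ["key_reg_q", "mask_en"]),
    "U2_mux": ("n_dbg_sel",    ["n_key_masked", "dbg_mode"]),
    "U3_buf": ("debug_out",    ["n_dbg_sel"]),
    "U4_xor": ("cipher_out",   ["key_reg_q", "state_q"]),
}

# Build net-level connectivity: driving net -> nets it can influence
fanout = defaultdict(set)
for _, (out_net, in_nets) in netlist.items():
    for src in in_nets:
        fanout[src].add(out_net)

def reachable(asset_net, observable_nets):
    """Return observable nets structurally reachable from the asset."""
    seen, queue, hits = {asset_net}, deque([asset_net]), []
    while queue:
        net = queue.popleft()
        if net in observable_nets:
            hits.append(net)
        for nxt in fanout[net] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return hits

# A real tool would add reachability conditions, privilege context, and
# CWE correlation; this only shows the structural first step.
print(reachable("key_reg_q", {"debug_out"}))  # -> ['debug_out']
```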

What application areas are your strongest?

We are strongest in high-assurance and mission-critical applications, including defense, aerospace, critical infrastructure, and automotive. Our platform is particularly valuable for designs where a single security failure can lead to system compromise, safety risks, or national security impacts.

What keeps your customers up at night?

Customers worry about unknown security vulnerabilities in their designs, which can lead to system recalls and the remanufacturing of chips. They are concerned about auditability, compliance, and proving their chips are secure for their customers.

What does the competitive landscape look like and how do you differentiate?

Existing solutions fall into three categories:
● Traditional EDA tools: Focused on functional correctness and performance, not adversarial security behavior.
● Manual security reviews: Expert-driven but slow, subjective, limited in coverage, and unscalable.
● Point security tools: Designed for RTL analysis and narrow in scope, i.e., they cover only a subset of the collateral generated by Analyz-N.

Silicon Assurance differentiates by being security-first, implementation-aware, scalable, exhaustive, and autonomous, positioning it to define hardware security verification as a core requirement for silicon designs rather than an ad-hoc review step.

What new features or technology are you working on?

We are combining GenAI-driven formal analysis, rule-based analysis, fuzzing-style exploration, and AI-driven reasoning into the Analyz-N platform to make it accessible to entry-level security verification engineers. Furthermore, such technologies will enable Analyz-N to detect various threat categories. The platform’s results are repeatable, verifiable, accurate, and scalable to modern hardware designs, enabling forward integration into commercial design flows.

How do customers normally engage with your company?

Customers typically engage through pilot evaluations on real IPs or SoCs, where we demonstrate concrete, design-specific findings. From there, we move into enterprise deployments aligned with existing verification flows and simulators. Our engagements are highly collaborative. We work closely with security, verification, and design teams to integrate Analyz-N into their design and verification lifecycle.

LEARN MORE

Also Read:

CEO Interview with Naama BAK of Understand Tech

CEO Interview with Dr. Heinz Kaiser of Schott

CEO Interview with Moshe Tanach of NeuReality


Podcast EP330: An Overview of DVCon U.S. 2026 with Xiaolin Chen

Podcast EP330: An Overview of DVCon U.S. 2026 with Xiaolin Chen
by Daniel Nenni on 02-06-2026 at 10:00 am

Daniel is joined by Xiaolin Chen, Senior Director of Technical Product Management for Formal Solutions at Synopsys. She has over 20 years of experience applying formal technology in verification and partnering with customers to identify opportunities where formal methods are best suited to solve complex verification challenges. She is currently serving as the General Chair for DVCon U.S. 2026, sits on the steering committee, and has served on technical program committees since 2019. She has also authored more than 20 conference papers and holds a patent.

Dan explores the upcoming DVCon event with Xiaolin. The conference will be held at the Hyatt Regency Santa Clara from March 2-5, 2026. Xiaolin explains that the Hyatt Regency is a new venue for the event. Its larger size will allow more events and larger sessions. The conference has grown quite a bit, and this year there are four Platinum sponsors. Xiaolin explains that this allows a larger number of sponsored events, creating a very full agenda.

Dan reviews the keynotes, tutorials, workshops, paper sessions, interactive poster sessions, and panels that are planned. There is also a hackathon. Xiaolin explains there are several high-profile keynote speakers. A main focus for many conference events is the application of AI. Dan and Xiaolin explore the implications of AI for design verification in some detail.

You can register for DVCon USA here.


The Risk of Not Optimizing Clock Power

The Risk of Not Optimizing Clock Power
by Mike Gianfagna on 02-06-2026 at 6:00 am

The Risk of Not Optimizing Clock Power

Clock power is rarely the issue teams expect to limit advanced-node designs. Yet in many chips today, over-driven clock networks quietly consume disproportionate power, reduce thermal headroom, and can constrain achievable frequency, all while passing traditional sign-off checks and often remaining locked in through tapeout.

Because clocks toggle continuously and span the entire chip, inefficiencies in the clock network compound relentlessly. Once locked into silicon, these costs are paid every cycle, in every product, for the life of the design. Let’s take a closer look at the risk of not optimizing clock power.

Power as a Silent Constraint

Power is a universal constraint for advanced chip designs, increasingly determining whether performance targets, thermal limits, and product differentiation can actually be achieved. While designers spend significant effort optimizing functional logic, the clock network often escapes the same level of scrutiny because it is assumed to be good enough once timing closes, even though its power behavior is never explicitly examined in a detailed, clock-network-specific way.

Why Clock Power Is Getting Worse

The massive processing required by AI workloads has turbo-charged the problem. There are many dials to turn to optimize power. It turns out the energy consumed by the clock network in most advanced designs is a substantial contributor to the power problem – often without being explicitly identified as such during sign-off, even as logic efficiency improves and overall power budgets tighten. Clock power can quietly erode available budget even when all timing requirements appear to be met.

Clock Networks: A Disproportionate Power Consumer

You can do your own research on the issue. I found some very useful data at numberanalytics.com and embedded.com. A few key statistics are worth repeating:

  • In modern chips, clock networks can account for over 50% of the total dynamic power consumption.

Several factors contribute to the power consumption of clock networks:

  • Capacitance: The capacitance of the clock network affects how much power is consumed during signal transitions.
  • Switching Activity: The frequency at which the clock toggles directly impacts power usage. Higher switching rates lead to increased power consumption.
  • Wire Length: Longer wires increase resistance, which can lead to higher power dissipation.

Capacitance, switching activity (or clock speed), and wire length are all familiar problems for advanced chip design. They all drive power consumption in the wrong direction.

It’s also important to note that unlike logic nets, which may toggle infrequently, the clock net has a 100 percent activity factor. Every inefficiency in gate sizing, topology, or loading assumption is therefore paid continuously. Small amounts of excess drive strength can translate into significant power loss when multiplied across deep clock networks and sustained over billions of cycles. This makes clock power uniquely unforgiving: inefficiencies do not average out over time.
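
A back-of-the-envelope calculation makes the point. Using the standard dynamic power relation P = α·C·V²·f with purely illustrative numbers (none taken from this article or any specific design), a clock net with a 100 percent activity factor burns an order of magnitude more power than a typical logic net carrying the same capacitance:

```python
# Illustrative only: rough dynamic power P = alpha * C * V^2 * f for a
# clock net vs. a typical logic net. All numbers are assumed.

def dynamic_power(alpha, cap_farads, vdd, freq_hz):
    """Switching power of a net: activity * capacitance * V^2 * f."""
    return alpha * cap_farads * vdd**2 * freq_hz

VDD, FREQ = 0.75, 2.0e9          # 0.75 V supply, 2 GHz clock (assumed)

# Same 50 fF load, very different activity factors.
clock_net = dynamic_power(alpha=1.0, cap_farads=50e-15, vdd=VDD, freq_hz=FREQ)
logic_net = dynamic_power(alpha=0.1, cap_farads=50e-15, vdd=VDD, freq_hz=FREQ)

print(f"clock net: {clock_net*1e6:.1f} uW, logic net: {logic_net*1e6:.1f} uW")
# Every femtofarad of excess clock load is paid every cycle, so a small
# over-estimate of wire load, multiplied across thousands of clock
# buffers, becomes a first-order power term.
```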

Where Traditional Clock Power Analysis Falls Short

Let’s look at some of the basic steps of how the clock tree is implemented:

  1. RTL design of the clock network, considering items such as net connectivity, number of nodes, required drive strength, and estimated wiring load.
  2. Synthesize the clock tree.
  3. Perform post layout extraction and timing analysis to verify the clock network meets timing specifications. At this stage, the clock network is typically judged complete based on timing closure alone. Once this judgment is made, opportunities to revisit clock power late in the flow are often avoided due to perceived risk.

This is clearly a simplified view of the steps involved, but what happens in step 3 presents a significant opportunity. Tools like static timing analysis can verify overall performance of the clock network to ensure timing is met. But at what cost?

The opportunity here lies in the clock gates that are inserted during synthesis. The choices made by the synthesis tool are influenced by the drive requirements of the elements in each clock tree and an estimate of the loading effects of the wiring. The amount of wiring has become quite large for advanced designs, so small errors in estimates can become big discrepancies in the final layout.

In any clock network, there will be drivers that are too large for the required load, resulting in wasted power. There will also be undersized drivers that will struggle to keep up, also wasting power, though over-sizing is far more common in practice.

A useful analogy here is internal combustion engines in automobiles. If the engine is too small for the car’s weight, it will struggle and waste gasoline. If it’s too powerful for the car’s weight, gas will also be wasted. There is an optimal engine size for a given car configuration from a fuel efficiency point of view. The same is true for clock network drivers.

This combination of scale, electrical complexity, and late-stage risk has made clock power one of the hardest problems to address in practice. The post-layout netlist of the clock network contains all the actual wiring load and clock drivers chosen by the synthesis tool. Some of those drivers will be too small for the required load and some will be too large, based on the difference between the estimate and actual wire loads.
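
As a crude illustration of that mismatch (not how any particular tool works, and with invented instances, loads, and thresholds), one could compare the load each clock buffer was sized for at synthesis against the post-layout extracted load and flag the outliers. The next paragraph explains why a truly reliable answer needs electrically accurate analysis rather than simple ratios like these.

```python
# Hypothetical sketch: compare the wire load each clock buffer was sized
# for at synthesis against the post-layout extracted load, and flag
# mismatches. Data, names, and thresholds are invented for illustration.

drivers = [
    # (instance,        estimated_load_fF, extracted_load_fF)
    ("cts_buf_root",     120.0,             118.0),
    ("cts_buf_l3_017",    60.0,              31.0),   # over-driven
    ("cts_buf_l5_244",    25.0,              44.0),   # under-driven
]

OVER, UNDER = 0.7, 1.3   # flag if actual load < 70% or > 130% of estimate

for name, est, act in drivers:
    ratio = act / est
    if ratio < OVER:
        print(f"{name}: over-driven (load {act} fF vs sized for {est} fF)"
              " -> downsizing candidate, wasted clock power")
    elif ratio > UNDER:
        print(f"{name}: under-driven (load {act} fF vs sized for {est} fF)"
              " -> timing/slew risk")
```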

The most reliable way to find these issues is to perform a SPICE-level analysis on the clock network. Until now, however, this level of electrically accurate analysis has been impractical at full clock-network scale for real designs.

Turning Clock Power Risk into an Addressable Problem

ClockEdge has developed technology that makes electrically accurate clock power analysis practical at full scale – an area that has traditionally been out of reach. Instead of relying on inferred models or averaged assumptions, this approach evaluates clock power behavior directly across complete clock networks under realistic post-layout conditions.

Crucially, this visibility can be applied early in the clock design process, beginning at clock tree synthesis (CTS). By identifying over-driven paths and unnecessary margin at CTS, teams can make informed sizing and topology decisions before inefficiencies are propagated and locked in. This early intervention reduces downstream power waste and minimizes the need for disruptive late-stage changes.

As the design matures, the same electrically grounded analysis continues to provide value, allowing teams to validate clock power assumptions and refine optimization decisions with confidence. By anchoring analysis in actual electrical behavior, clock power optimization becomes a controlled, data-driven exercise rather than a risky late-stage guess – while preserving timing integrity throughout the flow.

Clock power is no longer a secondary concern; it is a growing source of hidden risk in advanced-node designs. When conservative assumptions are embedded early in CTS and left unexamined, unnecessary power consumption can become locked in, quietly constraining performance, thermal headroom, and predictability. The risk of not optimizing clock power is that its impact often goes unnoticed until it’s too late to change.

To Learn More

If clock power is expected to challenge your next design, ClockEdge provides a practical solution to evaluate and optimize clock power using its vPower solution. To see how this technology applies to your clock network before inefficiencies are locked in, request a demo with the ClockEdge team. This is how you can avoid the risk of not optimizing clock power.

Also Read:

Taming Advanced Node Clock Network Challenges: Jitter

Taming Advanced Node Clock Network Challenges: Duty Cycle

How vHelm Delivers an Optimized Clock Network


Quadric’s Recent Momentum & Funding Success

Quadric’s Recent Momentum & Funding Success
by Daniel Nenni on 02-05-2026 at 10:00 am

Quadric Chimera GPNPU

Quadric®, Inc., headquartered in Burlingame, California, is accelerating its position as a leading provider of programmable AI inference processor intellectual property (IP) and development tools for on-device AI workloads. The company announced an oversubscribed $30 million Series C funding round, bringing total capital raised to $72 million. This round was led by the ACCELERATE Fund (managed by BEENEXT Capital Management), with participation from returning investors Uncork Capital and Pear VC, as well as new investors Volta, Gentree, Wanxiang America, Pivotal, and Silicon Catalyst Ventures.

According to Quadric and reporting by industry outlets, this Series C comes as product revenues more than tripled in 2025 compared to 2024, reflecting strong adoption of the company’s technology across multiple application areas including edge large language models (LLMs), automotive AI processing, and enterprise vision workloads. Executives point to “accelerating design-win momentum” as proof of increasing market traction for the Quadric Chimera™ general-purpose neural processor architecture.

Quadric positions its Chimera™ IP as a fully programmable alternative to fixed-function NPUs that typically dominate the edge AI landscape. Chimera supports both traditional DSP and AI inference tasks on a unified architecture, enabling semiconductor designers to build chips capable of running vision models and on-device LLMs (including models up to ~30B parameters). The architecture is scalable from 1 trillion operations per second (TOPS) up to 864 TOPS, with automotive-grade, safety-enhanced options for ASIL compliance.

Management has emphasized that this software-centric, programmable approach helps shield customers from the risk of obsolescence as model architectures evolve— an advantage over rigid accelerators in a fast-changing landscape.

Strategic License Wins & Ecosystem Expansion

In conjunction with the funding announcement, Quadric also disclosed new licensing wins that underscore the diversity of its target markets. One new licensee is a leading Asia-based edge-server LLM silicon provider, reflecting demand for on-device language model inference at scale.

A second major engagement comes from TIER IV, Inc. of Japan, a leader in autonomous driving software. TIER IV has licensed the Chimera AI processor software development kit (SDK) to evaluate and optimize future iterations of Autoware®, the open-source autonomous vehicle software stack the company pioneers.

The TIER IV engagement illustrates Quadric’s push into automotive and autonomous system markets, where efficient, programmable AI compute is increasingly a differentiator. As AI workloads proliferate in next-generation vehicles—handling perception, planning, and control—Quadric’s SDK provides a pathway for developers to optimize inference for the specific needs of autonomous platforms.

Leadership & Organizational Developments

Quadric has been strengthening its leadership and engineering bench. In December 2025, the company announced the appointment of Ravi Chakaravarthy as Vice President of Software Engineering, a role focused on driving the development of the embedded AI software stack that underpins the Chimera IP and toolchain.

Around the same time, Quadric also added Joachim Kunkel as an independent member of its Board of Directors. Kunkel brings deep industry experience, particularly from his 18-plus years leading Synopsys’ semiconductor IP division. His participation signals Quadric’s intent to scale strategically and benefit from seasoned guidance as it expands its footprint in the AI processor IP ecosystem.

Market Context & Competitive Stance

Quadric’s progress comes against a backdrop of consolidation in the neural processing unit (NPU) IP market. According to industry reporting, the number of startups offering NPU IP has declined as competition intensifies and differentiation becomes more difficult. Quadric’s blend of programmable hardware and comprehensive software tooling is cited by some observers as a key competitive advantage that helps it stand out from both fixed-function accelerator vendors and other IP licensors.

Bottom Line: With this recent funding and expanding customer engagements, Quadric appears positioned to continue its strategy of licensing the Chimera IP and SDK to chipset designers tackling the diverse demands of edge AI, in areas ranging from smart devices and industrial systems to advanced driver assistance and autonomous vehicles.

Quadric also has a new website which, as a website connoisseur, I find quite clever. Click here and check it out.

Also Read:

Quadric: Revolutionizing Edge AI

Legacy IP Providers Struggle to Solve the NPU Dilemma

Recent AI Advances Underline Need to Futureproof Automotive AI


Beyond Transformers. Physics-Centric Machine Learning for Analog

Beyond Transformers. Physics-Centric Machine Learning for Analog
by Bernard Murphy on 02-05-2026 at 6:00 am

Physics based learning

Physical AI is an emerging hot trend, popularly associated with robotics though it has much wider scope than compute systems interacting with the physical world. For any domain in which analysis rests on differential equations (foundational in physics), the transformer-based systems behind LLMs are not the best fit for machine learning. Analog design analysis is a good example. SPICE simulators determine circuit behavior by solving a set of differential equations governed by basic current and voltage laws applied to circuit components together with boundary conditions. Effective learning here requires different approaches.

Machine learning methods in analog

The obvious starting point is to run a bunch of SPICE sims based on some kind of statistical sampling (say, across parameter and boundary-condition variances) as input to ML training. Monte Carlo analysis is the logical endpoint of this approach. For learning purposes, such methods are effective but significantly time- and resource-consuming. Carefully reducing samples will reduce runtimes, but what will you miss in between those samples?

A different approach effectively builds the Newton (-Raphson) iteration method behind SPICE into the training process, enabling automated refinement in gradient-based optimization. (Concerns about correlation with signoff SPICE can be addressed through cross-checks when needed.)

Newton's method is the gold standard in circuit solving, but it is iterative, which could dramatically slow learning without acceleration. Parallelism provided the big breakthrough that launched LLMs to fame. Could the same idea work here? The inherently non-linear nature of analog circuit equations has proven to be a major challenge in effectively using parallelism for real circuits. However, a recent paper has introduced an algorithm allowing machine learning with embedded circuit simulation to more fully exploit parallelism on hardware platforms such as GPUs.
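
For readers who have not met it, a one-node example shows both why Newton-Raphson is the workhorse of circuit solving and why its serial, iterative nature is the obstacle to parallelism. The sketch below solves the nodal equation of a resistor driving a diode; the device values are assumed for illustration, and this is of course a toy next to what SPICE does per node, per timestep.

```python
import math

# Minimal Newton-Raphson nodal solve for one nonlinear node: a 1 kOhm
# resistor from a 3 V source into a diode to ground. Device values are
# assumed for illustration; SPICE does this per node per timestep.
VS, R = 3.0, 1e3
IS, VT = 1e-14, 0.02585          # diode saturation current, thermal voltage

def f(v):        # KCL residual at the node: resistor current - diode current
    return (VS - v) / R - IS * (math.exp(v / VT) - 1.0)

def dfdv(v):     # derivative of the residual (the 1x1 "Jacobian")
    return -1.0 / R - (IS / VT) * math.exp(v / VT)

v = 0.7          # initial guess
for it in range(50):
    step = f(v) / dfdv(v)
    v -= step
    if abs(step) < 1e-12:
        break

print(f"node voltage = {v:.4f} V after {it + 1} iterations")
# Each outer iteration depends on the previous one, which is why naive
# Newton inside a training loop serializes badly and why a parallel
# formulation matters for GPU-based learning.
```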

Beyond sampling to deep physics modeling

At first glance, simply embedding circuit solving inside a machine learning (ML) algorithm seems not so different from running sims externally and then learning from those results. Such a method would still be statistical sampling, packaged in a different way. Mach42 have a different approach in their Discovery Platform, which they claim leads to higher accuracy, stability and predictive power in results. I see good reasons to believe their claim.

Brett Larder (co-founder and CTO at Mach42) gave me two hints. First, they aim for continuous-time modeling during ML training, whereas most digital methods discretize time. They have observed that conventional methods predict results that are noisy, irregular, and even inaccurate. In their algorithm, by contrast, each training point can contain measurements at a different set of time values, rather than requiring that each index of an input/output sequence always correspond to the same time. Training data distributed across timepoints rather than bunched into discretized time steps: I can see how that could lead to more accurate learning.

Second, they aim to learn parameters of the differential equations describing the circuit, rather than learning coarse statistical sampling of input to output results. This is very interesting. Rather than learning to drive a model for statistical interpolation, instead it learns refined differential equations. For me this is intuitively more likely to be robust to changes under different physical conditions and truly feels like deep physics modeling.
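
A toy example may help show the distinction. The sketch below (my own illustration, not Mach42's algorithm) learns the time constant of a first-order response directly from irregularly sampled data by fitting the governing equation dv/dt = (u - v)/τ, rather than learning a table of sampled input-to-output results; all values are assumed.

```python
import numpy as np

# Toy illustration (not Mach42's algorithm): instead of learning an
# input->output map by sampling, learn the parameter of the governing
# ODE  dv/dt = (u - v) / tau  for a first-order (RC-like) response,
# from samples taken at irregular time points. All values are assumed.
rng = np.random.default_rng(0)
tau_true, u = 2.0, 1.0                      # time constant in us, step input

t = np.sort(rng.uniform(0.0, 10.0, 200))    # irregular timepoints (us)
v = u * (1.0 - np.exp(-t / tau_true))       # "measured" step response
v += rng.normal(scale=1e-4, size=t.size)    # measurement noise

dvdt = np.gradient(v, t)                    # numerical derivative at each sample

# Least-squares fit of k = 1/tau in dv/dt = k * (u - v): minimize the
# ODE residual over all samples, whatever their spacing in time.
drive = u - v
k = np.dot(dvdt, drive) / np.dot(drive, drive)
print(f"learned tau = {1.0 / k:.3f} us (true {tau_true} us)")
# A platform like the one described would learn many such coefficients
# jointly with gradient-based training; the point here is only that the
# thing being learned is the equation, not a table of sampled outputs.
```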

The cherry on the cake is that this algorithm can be parallelized to run on a GPU, making training very time-efficient, from SerDes at 2 hours to a complex automotive PMIC at 20 hours, all delivering accuracy at 90% or better. The models produced by this training can run inside SPICE or Verilog-A sims, orders of magnitude faster than classical AMS simulations.

Payoff

Active power management in devices from smartphones to servers depends on LDOs providing regulated power to serve multiple voltage levels. A challenge in designing these circuits is that the transfer function from input to output can shift as current draw and other operating conditions change. Since the goal of an LDO is to provide a stable output, compensating for such shifts is a major concern in LDO design.

The standard approach to characterizing an LDO is to run a bunch of sweeps over circuit parameters to determine how well compensation will manage across a range of load and operating conditions. These simulations consume significant time and resources and can miss potential issues in-between sweep steps. In contrast, the learned dynamic model created by the Discovery Platform provides much more accurate estimation across the range, so that anomalies are much more likely to be detected. Moreover, changes in behavior as parameters are varied can be viewed in real-time thanks to these fast dynamic models.

Very nice – moving beyond transformer-based learning to real physics-based learning. You can read more about this new modelling approach in Mach42’s latest blog HERE.

Also Read:

2026 Outlook with Paul Neil of Mach42

Video EP12: How Mach42 is Changing Analog Verification with Antun Domic

Video EP10: An Overview of Mach42’s AI Platform with Brett Larder


2026 Outlook with Abhijeet Chakraborty VP, R&D Engineering at Synopsys

2026 Outlook with Abhijeet Chakraborty VP, R&D Engineering at Synopsys
by Daniel Nenni on 02-04-2026 at 10:00 am

Abhijeet Chakraborty

Tell us a little bit about yourself and your company.

My name’s Abhijeet Chakraborty and I’m Vice President of Engineering at Synopsys. I led the development of Synopsys Design Compiler-NXT, the industry’s leading synthesis product, and now oversee the company’s multi-die and 3DIC product portfolio. Throughout my career, I’ve held a number of R&D roles in the semiconductor industry including at influential startups like Magma Design Automation and Monterey Design.

I’ve had a front‑row seat to how the semiconductor landscape is evolving—and how Synopsys is evolving with it. As products increasingly become intelligent, software‑defined systems – be it a robot, car, data center, or something in between – we are empowering our customers to “re-engineer their engineering” so they can meet unprecedented complexity in the AI era.

What was the most exciting high point of 2025 for your company?

Without question, the defining moment of 2025 for Synopsys was completing our acquisition of Ansys. This was not just a business milestone — it reshaped Synopsys into the leader in engineering solutions from silicon to systems by combining the leaders in electronic design automation (EDA), design IP, and simulation and analysis.

And our combination comes at a pivotal time for the industry. Engineering today’s intelligent systems is not only a silicon and software challenge, but a physics challenge too. As a combined company, we can bring together both digital and physical design, so engineering teams can innovate faster, better, and with the whole picture in mind – from silicon to systems.

What do you think the biggest growth area for 2026 will be, and why? How is your company’s work addressing this growth?

In 2026, I have my eye on the intersection of AI and multi‑die design — not only on multi‑die as an enabler for the AI era, but also on AI as an enabler for scalable multi‑die engineering.

Multi-die designs can deliver far greater performance and flexibility than monolithic chips, supporting soaring compute demands and AI-driven workloads. The challenge now is to manage the architectural and multiphysics complexities that come with heterogeneous integration, which go far beyond what traditional design flows can manage.

In summary, it is really using AI to develop AI – helping teams to explore architectures faster, optimize interconnects, and account for electrical, thermal, and mechanical effects early on and accurately with speed and scale.

Will you participate in conferences in 2026? Same or more as 2025?

Absolutely — 2026 will be a very active year. In a few weeks, I’ll be giving the opening keynote at the Chiplet Summit in Santa Clara on Wednesday, February 18, where I’ll share my perspective on how AI is transforming multi‑die design through advanced automation. This will be my third year at the summit and I am looking forward to speaking about this consequential topic. I especially enjoy the exhibits and panels as an opportunity to discuss challenges and solutions with others in the ecosystem.

Beyond Chiplet Summit, Synopsys will be present across industry events, especially as we bring the Synopsys and Ansys communities together. Our inaugural Synopsys Converge conference in March 2026 will be a major gathering point — bringing Synopsys User Group “SNUG” Silicon Valley, Simulation World, and Executive Forum under the same roof for one flagship event.

Learn more: Synopsys at Chiplet Summit 2026

Register: Synopsys Converge: Re-engineering the Future

Also Read:

Synopsys and AMD Honored for Generative and Agentic AI Vision, Leadership, and Impact

Synopsys’ Secure Storage Solution for OTP IP

Curbing Soaring Power Demand Through Foundation IP

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation


The Launch of RISC-V Now! A New Chapter in Open Computing

The Launch of RISC-V Now! A New Chapter in Open Computing
by Daniel Nenni on 02-04-2026 at 8:00 am

RISC Now Andes

On February 3, 2026, Andes Technology officially announced the launch of RISC-V Now!, a new global conference series designed around the next phase of RISC-V adoption: real-world deployment and commercial scaling. This initiative marks a shift from exploratory and research-focused events toward practical, production-oriented exchanges that help engineers, architects, and decision-makers navigate the realities of building shipping systems on a RISC-V foundation.

At its core, RISC-V Now! responds to an industry that is rapidly moving past foundational experimentation into crafting competitive products using an open instruction set architecture (ISA). Unlike more conceptual workshops or broad ecosystem summits, this series emphasizes deployment challenges, system-level tradeoffs, and lessons learned from production-scale platforms. The first marquee gathering will take place in Silicon Valley (San Jose) on April 20–21, 2026, at the DoubleTree by Hilton, with additional regional events scheduled in Hsinchu (April 15), Shanghai (May 12), and Beijing (May 14).

Context: Why RISC-V Matters

To appreciate the significance of RISC-V Now!, it helps to understand RISC-V’s broader trajectory. RISC-V is an open-source instruction set architecture that emerged from academic research at UC Berkeley and has since gained global momentum. Its open nature means companies can implement it without costly licensing fees or restrictive agreements, a contrast to proprietary ISAs like ARM or x86. This has already led to explosive adoption in embedded devices and microcontrollers, where billions of RISC-V cores now ship annually.

Over the past few years, support for RISC-V has expanded in both software infrastructure and hardware capabilities. For example, major Linux distributions such as Debian have begun officially supporting 64-bit RISC-V, and advanced kernel patches (like ZALASR) are moving toward mainline inclusion — signs of maturation in the open ecosystem. At the same time, companies such as SiFive and StarFive are pushing higher-performance RISC-V designs aimed at AI, IoT, and data center usage.

RISC-V Now! emerges against this backdrop at a moment when RISC-V is no longer just a promising idea but a practical choice for commercial products. The conference series directly tackles questions that teams face when moving from prototypes to shipped systems: how to balance performance, power efficiency, cost, software integration, validation, and ecosystem tooling.

What the Conference Focuses On

According to Andes Technology, the curriculum of RISC-V Now! is tailored for practitioner-level discussions rather than pure theory. Key themes include:

System-Level Tradeoffs: Choosing CPU and SoC strategies that meet specific use-case constraints (e.g., AI acceleration pools vs. energy efficiency corridors).

Software Enablement Challenges: Real issues encountered when porting, optimizing, and maintaining software stacks at scale on RISC-V hardware.

Lessons from Production Systems: Case studies and insights from companies that have already brought RISC-V products to market.

By foregrounding deployment realities rather than just technological promise, RISC-V Now! aims to bridge the gap between enthusiasm for open ISAs and the concrete needs of engineering teams tasked with delivering competitive products.

Industry Implications

The launch of this series further indicates that RISC-V has transitioned from novel architecture to mainstream contender. Once primarily associated with niche and embedded applications, RISC-V is now being positioned as a foundation for general computing, AI, automotive systems, and data centers. Its open nature not only reduces barriers to entry but also allows customization that proprietary ISAs can’t match.

Bottom line: This trend is reflected across the tech ecosystem: from expanded Linux support to growing commercial IP offerings and major ecosystem events around RISC-V architectures. RISC-V’s momentum is unmistakable, and initiatives like RISC-V Now! are helping solidify its place in the production pipelines of tomorrow’s computing platforms.

REGISTER NOW

Also Read:

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

RISC-V: Powering the Era of Intelligent General Computing

Navigating SoC Tradeoffs from IP to Ecosystem


NanoIC Extends Its PDK Portfolio with First A14 Logic and eDRAM Memory PDK

NanoIC Extends Its PDK Portfolio with First A14 Logic and eDRAM Memory PDK
by Daniel Nenni on 02-04-2026 at 6:00 am

Nano IC TSMC Process Roadmap

NanoIC has announced a major expansion of its process design kit portfolio with the introduction of its first A14 logic and embedded DRAM (eDRAM) memory PDK. This milestone reflects the company’s growing role in enabling advanced semiconductor design at cutting-edge technology nodes and addresses increasing industry demand for highly integrated, power-efficient system-on-chip (SoC) solutions.

As semiconductor processes continue to scale, the availability of robust and well-validated PDKs has become a critical success factor for chip designers. A PDK serves as the essential interface between a foundry’s manufacturing process and EDA tools, providing accurate models, design rules, device libraries, and verification decks. By extending its portfolio to include A14-class technology, NanoIC is positioning itself to support next-generation designs for applications such as AI, HPC, mobile processors, and advanced networking.

The newly released A14 logic PDK is designed to address the challenges associated with extreme scaling, including tighter design rules, increased variability, and complex power-performance trade-offs. NanoIC’s solution offers comprehensive transistor models, standard cell support, and reliability data that allow designers to confidently optimize performance, power consumption, and silicon area. This is especially important at advanced nodes, where even small inaccuracies in modeling can lead to costly redesigns or yield issues.

What sets this announcement apart is the inclusion of an eDRAM memory PDK alongside the logic offering. Embedded DRAM has re-emerged as an attractive memory option for advanced SoCs due to its higher density compared to SRAM and lower latency compared to off-chip DRAM. Integrating eDRAM directly on logic chips enables designers to build memory-rich architectures that improve bandwidth and energy efficiency, key requirements for data-intensive workloads such as AI inference and edge computing.

NanoIC’s A14 eDRAM PDK provides designers with the tools needed to seamlessly integrate memory blocks into complex SoC designs. The PDK includes memory cell libraries, timing and power models, and process-aware design rules that ensure manufacturability and reliability. By aligning the eDRAM PDK closely with the A14 logic process, NanoIC enables tighter co-optimization between logic and memory, reducing design complexity and accelerating time-to-market.

Another important aspect of the new PDKs is their compatibility with leading EDA platforms. NanoIC has emphasized interoperability and early design enablement, allowing customers to begin architectural exploration and IP development well before volume manufacturing. This early access is increasingly valuable as design cycles lengthen and the cost of advanced-node development continues to rise.

From a broader industry perspective, NanoIC’s move highlights a growing trend toward specialized and differentiated PDK offerings. As advanced nodes become more complex, chipmakers are seeking partners that can provide deep process expertise and tailored design enablement rather than one-size-fits-all solutions. By delivering both logic and eDRAM PDKs at the A14 level, NanoIC demonstrates its ability to support heterogeneous integration and memory-centric architectures that define modern semiconductor innovation.

Bottom line: NanoIC’s extension of its PDK portfolio with its first A14 logic and eDRAM memory PDK represents a significant step forward for the company and its customers. The new offerings address the technical demands of advanced semiconductor design while enabling higher performance, greater integration, and improved power efficiency. As the industry continues to push the limits of scaling and system complexity, comprehensive PDK solutions like NanoIC’s will play a crucial role in turning ambitious chip concepts into manufacturable reality.

Also Read:

TSMC’s 2026 AZ Exclusive Experience Day: Bridging Careers and Semiconductor Innovation

The Chronicle of TSMC CoWoS

TSMC’s CoWoS® Sustainability Drive: Turning Waste into Wealth


2026 Outlook with Coby Hanoch of Weebit Nano

2026 Outlook with Coby Hanoch of Weebit Nano
by Daniel Nenni on 02-03-2026 at 10:00 am

Coby Hanoch, CEO of Weebit Nano

Coby Hanoch is the CEO of Weebit Nano. Coby has nearly 45 years of experience in the semiconductor and related industries, including engineering, engineering management, sales, and executive roles. Coby was previously CEO at PacketLight Networks and held VP Worldwide Sales roles at both Verisity and Jasper Design Automation.

Tell us a little bit about yourself and your company.

Weebit Nano is a provider of advanced non-volatile memory (NVM) IP. We develop and license our ReRAM technology to foundries, IDMs and product companies. I joined Weebit eight years ago, when we were effectively a startup with two engineers. Today, we are a production-ready ReRAM provider with over 60 employees, about a third of whom hold PhDs. We also have license agreements with onsemi, Texas Instruments, DB HiTek, and SkyWater, as well as several agreements with product companies.

What was the most exciting high point of 2025 for your company?

For me, the standout high point was the momentum behind commercial validation and our move into a genuinely production-ready position. The onsemi and TI agreements are good examples of that, because they combine manufacturing and product licensing in the relationships, which is rare and powerful.

On the technology side, it was also exciting to see our readiness for demanding use cases: we’ve qualified our ReRAM to AEC-Q100 at 150°C and demonstrated more than 100,000 endurance cycles, and we’ve proven the technology in silicon across 130nm, 65nm, 28nm and 22nm, and successfully simulated it on FinFET nodes as well.

What was the biggest challenge your company faced in 2025?

One of the biggest challenges is that the industry doesn’t switch memory technologies overnight. Embedded flash is still widely used because it’s familiar and deeply embedded into design flows, even though scaling and integration constraints are getting harder to ignore.

Another practical challenge is simply execution at scale: supporting multiple customers and multiple projects means building the processes, teams, and infrastructure to deliver consistently across foundry integration, product modules, and ongoing engineering support.

How is your company’s work addressing this challenge?

We address the natural adoption hesitance by proving we are ready for real production: licensing to credible partners, demonstrated silicon, and qualification work that matters to customers. We also help customers see the significant advantages ReRAM has in terms of low manufacturing cost, faster access time and lower power consumption.

We’re also strengthening our operating model around customer delivery: expanding our global sales and support footprint, streamlining and automating procedures, and building a Customer Success team with deep fab experience so we can effectively run multiple customer programs.

What do you think the biggest growth area for 2026 will be, and why?

I expect many foundries and IDMs, with whom we have been talking for some time, to step forward and engage with us in licensing agreements. The demand for ReRAM is growing every day, and key players have already engaged with Weebit, so the perceived risk level is dropping significantly.

One of the biggest growth areas will be edge AI moving toward more integrated, monolithic designs where on-chip NVM becomes a real differentiator for cost, power, and security.

I also see strong growth in smarter automotive MCUs driven by electrification and ADAS, where reliability and advanced-node integration are increasingly important.

In reality, there are other compelling segments too, as almost every application needs embedded NVM, including more integrated analog and mixed-signal ICs (like smart PMICs), and high-reliability environments such as aerospace and LEO satellites.

How is your company’s work addressing this growth?

For edge AI, ReRAM is a strong fit because it enables keeping weights on-chip, which can reduce latency and power consumption, and improve security. ReRAM bits are also smaller than SRAM bits, supporting higher on-chip memory density and higher accuracy.

For automotive and harsh environments, we’ve focused on the reliability that customers need, including qualifying for AEC-Q100 at 150°C and endurance beyond 100,000 cycles. We are already demonstrating operation at higher temperatures.

For analog and mixed-signal designs, our ReRAM is back-end-of-line (BEOL), which makes it easier to integrate without compromising analog blocks, making these designs more efficient at lower manufacturing cost compared to flash.

Towards supporting growth in these markets and others, our focus in 2025 was setting up the infrastructure to support many big customers in parallel. As more and more companies move towards ReRAM as their embedded NVM of choice, our new Customer Success team will ensure the success of all the projects we engage in.

What conferences did you attend in 2025 and how was the traffic?

In general, we prioritize conferences where we can schedule high-quality meetings with foundries, IDMs, and SoC teams focused on embedded NVM, edge AI, and automotive. When the audience is concentrated, the traffic and meeting quality are strong. In 2025 we participated at Embedded World in Germany, CES in the USA, and numerous local shows around the globe. Our technologists also presented at conferences like CEA-LIST Tech Days, MPSoC’25, The Future of Memory and Storage (FMS) and the VLSI Symposium.

Will you participate in conferences in 2026? Same or more as 2025?

We expect to participate at a similar level or slightly more than in 2025, focusing on events that concentrate on our target customers and convert into real project engagement.

How do customers normally engage with your company?

Customers typically engage with us in two ways. Foundries and IDMs license our technology for process integration and qualification, with license fees, NRE and support. Product companies license ReRAM modules for SoCs, again with license fees, possible NRE, and royalties once products go into production.

We also work with product companies on tailored ReRAM modules optimized to their chip, because foundries may not want to customize memory modules for each product. Companies can reach us through our website at www.weebit-nano.com.

Are you incorporating AI into your products?

We see AI as a major driver for ReRAM adoption, as ReRAM is well suited to edge AI and neuromorphic approaches. ReRAM offers significantly higher density than SRAM, enabling a larger number of model coefficients to be stored directly on chip, which in turn improves inference accuracy within the same silicon footprint. We have already validated AI inference using ReRAM in working silicon and are seeing increasing momentum around near-memory and in-memory computing approaches. And because each ReRAM cell can function analogously to a synapse, the technology aligns naturally with neuromorphic computing architectures.
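
For readers unfamiliar with the synapse analogy, the idea is that a weight stored as a cell conductance G multiplies an input applied as a voltage V through Ohm's law (I = G·V), and the bit line sums those currents by Kirchhoff's current law. The sketch below shows the arithmetic equivalence with purely illustrative numbers; it is a conceptual model of in-memory computing in general, not Weebit's implementation.

```python
import numpy as np

# Conceptual sketch of analog in-memory multiply-accumulate with a
# ReRAM crossbar: weights are stored as cell conductances G, inputs are
# applied as word-line voltages V, Ohm's law does each multiply and the
# bit-line current sums them. Values are illustrative, not Weebit data.
rng = np.random.default_rng(1)

weights = rng.uniform(-1.0, 1.0, size=(4, 8))    # a tiny 4-output layer
inputs  = rng.uniform(0.0, 1.0, size=8)

G_MAX  = 100e-6                                  # max cell conductance (assumed, S)
V_READ = 0.2                                     # read voltage scale (assumed, V)

# A differential pair of columns per weight handles signed values.
g_pos = np.clip(weights, 0, None) * G_MAX
g_neg = np.clip(-weights, 0, None) * G_MAX
v_in  = inputs * V_READ

i_out = g_pos @ v_in - g_neg @ v_in              # bit-line currents (KCL sum)
mac   = i_out / (G_MAX * V_READ)                 # scale back to the math result

print(np.allclose(mac, weights @ inputs))        # True: same dot product, done in place
```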

To be clear, we’re primarily an embedded memory IP and licensing company, so the AI aspect is mostly about enabling our customers’ AI-capable chips.

Is AI affecting the way you develop your products?

We believe there are many ways in which AI can make us more efficient, as well as enable better analysis of the data we accumulate. To this end, we have engaged with a leading university AI professor who is reviewing our R&D processes and procedures and recommending how AI can help improve them.

Additional comments?

I believe we’re at a turning point for embedded non-volatile memory. ReRAM is increasingly validated in real silicon, and in the commitments made by key players. It scales where embedded flash does not, and it integrates in a way that aligns with where SoCs are heading. I expect 2026 to be the year where many companies, both fabs and product companies, make the move towards using ReRAM as their embedded NVM of choice.

CONTACT WEEBIT NANO

Also Read:

Weebit Nano Moves into the Mainstream with Customer Adoption

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Weebit Nano is at the Epicenter of the ReRAM Revolution


How Switzerland Built a Global Semiconductor Edge by Thinking Smaller

How Switzerland Built a Global Semiconductor Edge by Thinking Smaller
by Admin on 02-03-2026 at 8:00 am

Alain-Serge Porret

By Alain-Serge Porret, Vice President, Integrated & Wireless Systems, CSEM

Since ramping up several years ago, the global semiconductor and artificial intelligence (AI) race has been driven by scale: building larger data centers, developing bigger and more powerful models, and, with them, increasingly complex and power-hungry chips. The race for scale saw global investment in AI reach $252.3 billion in 2024 alone, according to Stanford University, a 50% increase from the previous year. Within this ecosystem, the prevailing assumption in many instances has been that competitiveness is a function of attracting the largest investments and creating the most computing power as fast as possible.

This logic has fueled significant innovations in recent years, but it has also created an environment in which only a handful of nations and organizations have the resources needed to join the race and participate at the highest levels.

With a population of less than nine million, Switzerland is a prime example of a country aiming to carve its own lane in this tight market. It is not a member of the EU, and it lacks the vast stretches of land and natural resources that fuel data centers and large fabrication plants in powerhouses like the United States and China, so its researchers have instead taken a unique approach to securing a seat at the table. Rather than trying to outmuscle the bigger names, the Swiss have focused on outperforming in efficiency, specialization and precision, with an emphasis on ultra-low-power semiconductor design.

This route is beginning to shift from effective to essential as the world begins to grapple with supplying the resources, from rare earths to energy, that are required to support today’s dominant theories around semiconductor design.

Chips and AI Heading Towards the Energy Wall

Over the past few years large AI models have grown considerably, and with that growth has come the need for more complex chips, bigger data centers, and subsequently more resources. Recent reports have predicted that in the United States alone, data centers could consume upwards of 68 billion gallons of water a year by 2028, with an estimated three percent of all electricity consumption around the world being tied to AI demands by just 2030.

The industry’s scale-centric trajectory is colliding with physical and economic limits, with power grids already stretched thin. Even leading companies that once championed aggressive scale are now looking at how to properly size chips for the energy realities of today by incorporating efficiency measures, model compression, hardware specialization, and on-device intelligence to reduce costs and carbon footprint.

But in situations where you cannot scale up, you can always scale inwards, refining each part of the process, reducing unnecessary computation and optimizing to be more energy efficient. There are few nations doing this better than Switzerland.

By optimizing inward, chips are able to perform more complex tasks, such as face anonymization, driver monitoring, medical inference, and condition monitoring, while using only a fraction of the energy that typical cloud-based AI pipelines require. Power demands are rising, and costs are rising with them. As they do, this level of optimization becomes central to the future of AI deployment.

Specialization Beats Scale When the Job Demands It

Switzerland’s approach is built on a growing recognition that general-purpose AI models are not always the most effective. The world’s largest AI models are extraordinary tools that are changing the way we work and live seemingly by the day. However, their breadth of function can come with tradeoffs, even beyond their high energy requirements. Oftentimes when we focus on becoming a jack of all trades, we naturally wind up being a master of none.

By contrast, Swiss research organizations and innovation centers, such as CSEM, have concentrated on tailored systems designed to excel at highly specialized functions. For example, work on custom Application-Specific Integrated Circuits (ASICs) has shown instances where specialized circuits can match, or even exceed, the performance of general-purpose processors, while only using a fraction of the energy. In other cases, domain-specific AI models, trained to recognize patterns in constrained environments, often outperform larger models when applied to targeted use cases.

While this focus limits potential applications, it homes in on core critical functions, performing specific tasks locally, reliably, securely and quickly. By understanding the constraints of one problem, engineers can develop solutions that use only the computing power and energy needed to solve that problem. Take, for instance, privacy-preserving AI systems that forget personal biometric data immediately after input, or driver-monitoring systems that need to run continuously without affecting a vehicle’s battery performance. These challenges require intelligence that is small, local, and efficient, rather than a model that attempts to do everything at once, often in the cloud on a distant data center.

These specialized technologies serve as a complement rather than a competitor. This carves out a separate, clean lane for nations and research institutions that do not benefit from billions of investment dollars or massive resources. Switzerland’s contribution to the global ecosystem is not to replace large-scale AI, but to supply the high-efficiency components for specific functions that need to be met in a highly sustainable and precise manner.

Precision Engineering as a National Advantage

Switzerland’s success in this niche is not accidental or coincidental. It flows from a national ecosystem shaped by decades, if not centuries, of precision, and an educational system that aims to perpetuate those skills for the digital age. From deep roots in watchmaking to more modern advanced manufacturing, biomedical instrumentation and micro-electronics research, Switzerland has garnered a strong reputation for building devices designed to perform highly specialized tasks flawlessly. These areas have aligned naturally with the needs of ultra-low-power and energy-efficient semiconductor design.

The country has also worked carefully to create a collaborative environment that emphasizes speed and agility, empowering teams to quickly prototype and test specialized chips. Researchers and engineers work closely with industry partners, allowing concepts to move from lab to deployment quickly while maintaining high standards for performance and reliability. The emphasis on interdisciplinary interactions, which combines education, manufacturing, technology and research, enables a focused approach throughout the process.

Switzerland’s political neutrality and stable research funding environment also allow long-term projects to thrive. Rather than chasing short-term market cycles, institutions can invest in technologies with value creation horizons measured in years, rather than months. Engineers working on chip development in the country embody the national ethos by identifying strategic niches where precision, reliability, and efficiency matter more than size, and then excelling within those boundaries.

Looking Ahead: Why This Model Matters for the Future of AI and Chips

As the global AI and semiconductor ecosystems evolve at a breakneck pace, Switzerland’s approach is creating a blueprint for countries and regions looking for viable ways to contribute to the global chips race without matching the massive scale. The future will not be defined solely by the largest models or the biggest data centers. Instead, it will be important to keep in mind that:

  • Efficiency is a competitive advantage. As energy becomes a limiting factor, systems that deliver strong performance at low power will hold increasing value across sectors.
  • Specialization can outperform scale. Domain-specific intelligence and custom hardware will continue to offer superior performance for real-time, safety-critical, and privacy-sensitive applications.
  • Niche excellence strengthens the global ecosystem. Small, highly optimized models and chips can integrate with and enhance larger AI systems, enabling better performance at the system level than any single approach could achieve alone.

As the AI and semiconductor industries look to the next decade, those who can focus on more precise and sustainable approaches are ideally positioned not only to get a piece of the pie but also to help create a far stronger and more adaptable AI and chip environment for the entire globe.

Also Read:

Podcast EP329: How Marvell is Addressing the Power Problem for Advanced Data Centers with Mark Kuemerle

Agentic at the Edge in Automotive and Industry

Arteris Smart NoC Automation: Accelerating AI-Ready SoC Design in the Era of Chiplets