CEO Interview with Sudhanshu Misra of ChEmpower Corporation
by Daniel Nenni on 05-16-2025 at 10:00 am

Sudhanshu Misra is an executive with more than 25 years of experience in the semiconductor industry, focused on advanced materials innovation and commercialization. He has held leadership roles at companies such as NthDegree, Entegris, Marubeni, and NexPlanar (a CMP pad company he co-founded) and technical roles at Bell Labs, Lucent Technologies, and Texas Instruments. His responsibilities have spanned overall business and financial performance, product development and innovation, trading, risk management, fundraising, and building successful start-ups.
Tell us about your company?

ChEmpower Corporation is an advanced materials and specialized chemistry company headquartered in Portland, Oregon. The company develops and supplies chemically reactive pads for planarization, offering an abrasive-free alternative to traditional polishing processes. ChEmpower is dedicated to eliminating abrasives from the polish process, improving chip yields, and advancing sustainability standards within the semiconductor industry.

Serving a range of semiconductor companies globally, ChEmpower is driving the future of semiconductor manufacturing, supporting the production of next-generation devices, and empowering innovation in high-performance technologies.

What problems are you solving?

A key enabling technology in chip manufacturing is planarization, which creates the flat surfaces on which chip circuitry is built.

Our abrasive-free technology is revolutionary, providing a cleaner, more efficient, and sustainable method for achieving planarization. By eliminating the abrasive component, we significantly reduce the potential for defects and micro scratches, which are common issues with traditional CMP techniques. This reduction in defects not only improves chip yields but also enhances the overall performance and reliability of the semiconductor devices.

Moreover, our technology offers a cost-effective solution for chip manufacturers. Without the need for expensive abrasives, the operational costs are lowered, making the manufacturing process more economical. Additionally, our approach aligns with the growing emphasis on sustainability within the industry. By minimizing waste and reducing the environmental impact associated with traditional planarization methods, ChEmpower is setting new standards for eco-friendly manufacturing practices.

The potential market for our technology is vast. While the current market size for conventional CMP solutions is estimated to be around $3 billion, the opportunity for our innovative, abrasive-free technology is projected to be between $10 billion and $12 billion. This significant market potential is driven by the benefits of improved chip yield, enhanced sustainability, simplified process flows, and reduced costs.

What application areas are your strongest?

Our initial product is designed to enhance copper interconnects for integrated circuits (ICs) and advanced packaging applications. ChEmpower's technology platform is versatile and can be adapted to other metals, including molybdenum and ruthenium, which are emerging metals for next-generation chips. Additionally, our technology is applicable to silicon polishing and to exotic materials such as glass and polymers, which require extremely smooth surface finishes in advanced packaging. By leveraging chemical action during the Chemical Mechanical Planarization (CMP) process, ChEmpower's technology ensures a more systematic, predictable, and controllable approach.

What keeps your customers up at night?

Yields and precise process control. Scalability to new materials and future advances are crucial concerns for our customers. They need solutions that not only address current manufacturing challenges but also adapt to the evolving demands of the semiconductor industry. Our technology provides the precision and control required to process these new materials, ensuring that manufacturers can meet stringent requirements for AI chips and HBMs.

What does the competitive landscape look like and how do you differentiate?

The competitive landscape is dominated by chemical companies on the polish slurry side and materials companies on the polish pad side. The slurry market is highly fragmented, with numerous chemical companies such as Fujifilm, Entegris, EMD, and Fujimi, to name a few. On the pad side, the dominant player is DuPont, followed by Entegris, a distant second.

What differentiates ChEmpower is that abrasive-free technology requires innovation on both the materials and the chemistry side, and none of the incumbent chemical or materials companies has both capabilities, nor the motive to develop them without jeopardizing their existing businesses. ChEmpower has a unique technology platform that amalgamates chemistry and materials to create an abrasive-free technology. It eliminates random defects, lowers operating costs, and enables sustainability, a differentiated position that no player in the market is positioned to challenge in the foreseeable future.

What new features/technology are you working on?

ChEmpower is targeting sub-10nm technology nodes, with a first product launch for copper this year. We already have a silicon product in development that will launch early next year. Our next products target molybdenum and then ruthenium interconnects. As you can see, ours is a chemistry-driven technology platform, and we envision extending our product offerings to both metals and non-metals. Our first copper product is well suited for hybrid bonding in advanced packaging applications. Other substrate polishes in advanced packaging are also attractive targets for abrasive-free ChEmpower technology.

How do customers normally engage with your company?

Customer engagement is a critical element in setting the right expectations for our product. ChEmpower has collaborative engagements with alpha customers, OEMs (equipment manufacturers), and strategic partners to establish customer requirements and thus the solution we provide with our technology. Our immediate approach is to engage customers both directly and via partners, from the pre-sales phase all the way through qualification, which typically takes 12-18 months. We will also insert ourselves early into the R&D phase for newer advanced technology nodes, which are opportunities for us to be adopted right from the onset of those nodes. Our partnership strategy will be critical to our go-to-market strategy as well as to customer support, enabling a surgical approach to gaining customer interest.

We will continue to deepen our engagement with customers on a broad basis to ensure we have the right market intelligence to drive alignment with their technology roadmaps. For this effort, we will work closely with our strategic sales partners to drive internal development strategies with a laser focus on specific customers and applications.

Also Read:

CEO Interview with Thar Casey of AmberSemi

CEO Interview with Ido Bukspan of Pliops

CEO Interview with Roger Cummings of PEAK:AIO


Podcast EP287: Advancing Hardware Security Verification and Assurance with Andreas Kuehlmann
by Daniel Nenni on 05-16-2025 at 10:00 am

Dan is joined by Dr. Andreas Kuehlmann, Executive Chairman and CEO at Cycuity. He has spent his career across the fields of semiconductor design, software development, and cybersecurity. Prior to joining Cycuity, he helped build a market-leading software security business as head of engineering at Coverity, which was acquired by Synopsys. He also worked at IBM Research and Cadence Design Systems, where he made influential contributions to hardware verification. Andreas also served as an adjunct professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences for 14 years.

Dan explores the growing and maturing field of hardware security with Andreas, who provides relevant analogies of how the software security sector has developed and what will likely occur in hardware security. Andreas describes the types of security checks and enhancements needed for chip hardware and suggests ways for organizations to begin modifying workflows to address the coming requirements. The work going on in the industry to develop metrics to characterize various threats is also discussed, along with an overview of the hardware security verification offered by Cycuity.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Contact Cycuity


Video EP5: A Discussion of Photonic Design Challenges and Solutions with Keysight
by Daniel Nenni on 05-16-2025 at 6:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Muhammad Umar Khan, Product Manager for Photonics Design Automation Solutions at Keysight Technologies. Muhammad describes the primary challenges of photonic design, including accurate models and a uniform methodology spanning chip and photonic design. The benefits of a hybrid digital twin model that combines simulation and fabrication data are discussed, along with the overall impact of the Keysight Photonic Designer solution.

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Contact Keysight


CEO Interview with Thar Casey of AmberSemi
by Daniel Nenni on 05-16-2025 at 6:00 am

AmberSemi CEO Thar Casey

Thar Casey is a serial entrepreneur focused on disruptive, game-changing technology architectures. Today, he is CEO of AmberSemi, a young California-based fabless semiconductor company advancing next-generation power management (conversion, control, and protection) that revolutionizes electrical products and improves semiconductor device performance, energy efficiency, and reliability. AmberSemi is delivering a generational architecture upgrade to power infrastructure globally.

Tell us about your company?

AmberSemi is a fabless semiconductor company developing next-generation power conversion, control, and protection solutions. Its patented innovations aim to improve power management across industries, including data centers, networking, telecom, industrial manufacturing, and consumer electronics. AmberSemi develops power architectures that move beyond traditional passive-analog technologies, incorporating embedded intelligence for greater safety and reliability. Headquartered in Dublin, California, the company is a member of the National Electrical Manufacturers Association and the Global Semiconductor Alliance. AmberSemi has received industry recognition, including Time Magazine's Best Inventions (2021), Fast Company's Next Big Thing in Tech (2022), the Edison Award for Innovation (2023 Gold & 2024 Silver), recognition as one of the Global Semiconductor Alliance's top four Startups to Watch (2023), and a spot on EE Times' 100 Startups Worth Watching (2024).

With 49 granted US patents and over 50 pending, the company's breakthroughs in digital control of electricity have yielded two core technologies:

Next-gen power conversion enabled by an active architecture that utilizes digital control and precision sensing, delivering superior conversion efficiency, power density, and performance.

A first-of-its-kind power switching and protection capability enabled by direct digital control and precision sensing, delivering intelligent power control and faster protection for higher reliability and better uptime.

These technologies are being productized to upgrade power architecture across trillions of devices globally and enable disruptive “killer app” enhancements to AI, EV, data center power and more.

The company’s first two commercial semiconductor lines for AC-to-DC Conversion and Switch Control and Protection will be available later this year and next, respectively.

What problems are you solving?

AI data centers are a quintessential example of the problem with today's electrical architecture. The industry is driven by the development and use of large language models, and by running more advanced AI processing on advanced GPU/CPU chips, it can do this processing faster and more effectively than ever before. The challenge is that these new chips require much more power. Power generation is a major issue, and many new technologies, such as nuclear, fusion, and fuel cells, are being developed to address it through new sources of generation. This focus is pushing hyperscalers to build new facilities that better secure the energy needed to power their processors. Yet the current power conversion architecture in every server is highly inefficient, like a leaky pipe that never gives the water pressure you want.

Industries like data centers, telecommunications, and networking require advanced power management to keep pace with technological developments, like AI. Current power architectures are unable to scale efficiently to meet the increasing power demands of these advanced processors, and these issues further multiply as power demand increases. Amber’s unique approach incorporates semiconductor physics, materials science, and advanced packaging to solve these issues in a revolutionary approach to power delivery.

AmberSemi's announced 50VDC-to-0.8VDC conversion for AI data centers removes conversion steps, reducing inefficiency by half and delivering 9% more power from the rack directly to the AI chip, saving an estimated $4 billion annually. (Each 1% efficiency improvement represents half a billion dollars in annual US savings and a reduction of 1.7M tons of CO2.)
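As a back-of-the-envelope illustration, the quoted per-percent figures compose as follows; this is a sketch using only the numbers cited in this answer (the interview's own figures, not independently verified):

```python
# Back-of-the-envelope check of the efficiency figures quoted above.
# Assumptions (taken from the interview, not independently verified):
#   - each 1% efficiency improvement ~= $0.5B/year in US savings and 1.7M tons CO2
#   - the announced architecture delivers ~9% more power to the AI chip

SAVINGS_PER_PCT_USD = 0.5e9   # dollars per year per 1% efficiency gain (quoted)
CO2_PER_PCT_TONS = 1.7e6      # tons of CO2 avoided per 1% gain (quoted)
efficiency_gain_pct = 9       # percentage points of improvement (quoted)

annual_savings = efficiency_gain_pct * SAVINGS_PER_PCT_USD
co2_avoided = efficiency_gain_pct * CO2_PER_PCT_TONS

print(f"Estimated annual savings: ${annual_savings / 1e9:.1f}B")   # ~$4.5B,
print(f"Estimated CO2 avoided:    {co2_avoided / 1e6:.1f}M tons")  # roughly
# consistent with the ~$4B annual savings figure quoted in the interview
```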

As industries move toward more modern fully solid-state power systems and smarter electrical integration, AmberSemi’s innovations sit at the center of this transition, contributing to a more sustainable, higher-performance electrical infrastructure.

What application areas are your strongest?

AmberSemi’s breakthroughs in power management strongly serve multiple sectors, including data centers, industrial, automotive, commercial/residential building applications and more.

Data Centers / Industrial Protection: Data centers and industrial factories benefit from digitized, more intelligent power. Operating downtime in a data center or factory triggered by transient events, like short circuits, is a major concern: a short circuit in one location cascades down the line, tripping all breakers, taking down the system, and costing thousands to millions of dollars per hour. AmberSemi's extremely fast (3,000x faster than today's standard) arc-free solid-state switching control and protection platform isolates the event to the single, initial circuit breaker/motor location, directly stopping cascade effects.

AmberSemi's breakthrough technologies are being applied to solve an even more critical problem today: the accelerating power crisis driven by AI data centers' power demands. Amber's power conversion solutions remove conversion steps from the data center wall, through the rack, down to the CPU/GPU. By removing steps, AmberSemi lifts efficiency rates dramatically – an estimated 50% improvement over current losses. Most importantly, AmberSemi will deliver increases in power density, enabling more AI computing.

Commercial/Residential: Buildings can integrate smart circuit breakers and connected electrical systems and products to improve safety and efficiency and to add significant IoT features. These Amber-powered products support access control, fire control, smart building automation, and security systems, delivering more than just reliable power management: enhanced environmental and energy awareness and more. Essentially, buildings gain a network of intelligent monitoring points powered by Amber, enhancing security, automation management, and operational oversight.

What keeps your customers up at night?

Often, internal innovation at established companies is limited by headwinds from current business priorities, such as existing technologies and business models with established financials.

Our customers are hungry for innovation from technology providers, such as semiconductor companies, to supply the solutions they need. If innovation is limited by legacy approaches to problems, there won't be the revolutionary leaps forward the industry desperately seeks. Amber's out-of-the-box thinking, unencumbered by history, is delivering the solutions these innovative, leading-edge customers need.

AmberSemi's disruptive opportunity is to deliver generational power architecture innovations that modernize legacy, established power systems:

Protection Products: Semiconductors provide tremendous benefits, yet the challenge is making the transition from manufacturing mechanical products to PCB-based electronic products. Amber's new solid-state power architecture dramatically simplifies this development effort through a single, flexible IC that acts as the heart of these systems for sensing and control, without compromising the customer's own core IP.

Data-Center Products: The industry seeks more AI computing from CPUs/GPUs in data centers, yet current power infrastructure is highly inefficient, creating cooling issues in today's servers. Simultaneously, AI GPU/CPU manufacturers are working on faster processing, but the new chips' power demands aren't supported by existing infrastructure.

Amber solves these issues through its high-density vertical power delivery solution, which drastically decreases wasted energy while increasing the power available to AI chips for more computing.

What does the competitive landscape look like and how do you differentiate?

Protection Products: There is no direct competition for what Amber is providing to this market – semiconductor-based switch control and protection.

The market currently centers on electromechanical architectures, but there is already a shift toward using semiconductors instead. Amber's products don't compete directly with any existing product; they are purpose-built devices for specific markets and applications that simplify and fix some of the native challenges of standard parts, such as microcontrollers.

In fact, Amber is partnering with major semiconductor companies that manufacture power devices such as Si/SiC MOSFETs, IGBTs, and more. We provide a strategic pathway for them to increase sales in general and to tap into new blue-ocean markets.

Data Centers: We are currently at the limit of today’s power delivery architecture due to fundamental issues, such as inefficiencies from pushing so much current across the server board. Vertical power delivery is the only viable solution recognized by the market to meet the ever-increasing power demand.

Currently, there are no vertical solutions available on the market. While we see some young companies investigating these opportunities and developing new technologies, from what we can see and hear from our customers, no complete solutions are on the horizon from these companies – except from AmberSemi.

Amber is developing a very disruptive complete 50V to 0.8V solution with vertical power delivery that aims to transform both power efficiency from the rack to the load and deliver the power directly to the CPU/GPU for substantially more AI computing.

What new features/technology are you working on?

AmberSemi recently announced our third product line, now under development, targeting enhanced power efficiency and power density for AI data centers. The category is one of the hottest – if not the hottest – in tech today, due to the demand for more AI computing. Yet the inherent challenge of supplying enough power within data centers limits AI computing. CPUs and GPUs in AI data center server racks require 20x to as much as 40x the power for their high-performance computing, which current data centers can't provide. Getting more AI computing today essentially means building new data centers, as retrofittable solutions for current data centers are not available.

AmberSemi's complete solution for converting high-voltage DC to low-voltage DC vertically at the processors solves the current efficiency issues and provides a scalable path for the future. We enhance efficiency at the server rack level, which translates to increased power density, or more power delivered directly to the chips, enabling more AI computing.

The announcement has garnered significant market attention from the largest tech players on earth because of the very disruptive nature of this recent breakthrough. In addition, it comes on the heels of the successful validation and commercialization of Amber's first two product lines: dramatically smaller AC-DC conversion and the first-of-its-kind solid-state switch control and protection. Both will be available on the market this year and next, respectively.

How do customers normally engage with your company?

The Amber team's significant experience across key markets helps identify weaknesses in current semiconductor offerings and opportunities across the range of electrical product applications. Amber uses this expertise to define how our parts perform and how best to simplify and improve designs. We work directly with our customers on these requirements when developing our solutions to deliver significant improvements in their products. This process allows them to differentiate and improve their products versus current offerings and to better protect their IP.

Today, AmberSemi has a pool of 50 or so Alpha / Beta customers who are actively engaged with us, have tested and validated our technologies or plan to in the near future.

Contact AmberSemi

Also Read:

CEO Interview with Sudhanshu Misra of ChEmpower Corporation

CEO Interview with Ido Bukspan of Pliops

CEO Interview with Roger Cummings of PEAK:AIO


EDA AI agents will come in three waves and usher us into the next era of electronic design
by Admin on 05-15-2025 at 10:00 am

Author: Niranjan Sitapure, AI Product Manager, Siemens EDA

We are at a pivotal point in Electronic Design Automation (EDA), as the semiconductors and PCB systems that underpin critical technologies, such as AI, 5G, autonomous systems, and edge computing, grow increasingly complex. The traditional EDA workflow, which includes architecture design, RTL coding, simulation, layout, verification, and sign-off, is fragmented, causing error-prone handoffs, delayed communication loops, and other roadblocks. These inefficiencies prolong cycles, drive up costs, and intensify pressure on limited engineering talent. The industry urgently needs intelligent, automated, parallelized EDA workflows to overcome these challenges to keep up with increasingly aggressive market demands.

AI as the Critical Enabler of Next-Gen EDA

AI is becoming essential across industries, and EDA is no exception. Historically, Machine Learning (ML) and Reinforcement Learning (RL) have enhanced tasks like layout optimization, simulation acceleration, macro placement, and others, in addition to leveraging GPUs for performance boosts. Generative AI has recently emerged, enhancing code generation, test bench synthesis, and documentation, yet still requiring significant human oversight. True transformation in EDA demands more autonomous AI solutions capable of reasoning, acting, and iterating independently. Fully autonomous AI agents in EDA will evolve progressively in waves, each building upon prior innovations and demanding multi-domain expertise. We have identified three distinct waves of EDA AI agents, each unlocking greater autonomy and transformative capabilities.

Wave 1: Task-specific AI agents

The first wave of AI-powered transformation in EDA introduces task-specific AI agents designed to manage repetitive or technically demanding tasks within the workflow. Imagine an AI agent acting as a log file analyst, capable of scanning vast amounts of verification data, summarizing results, and suggesting corrective actions. Or consider a routing assistant that works alongside designers to optimize floor plans and layout constraints iteratively. These agents are tightly integrated with existing EDA tools and often leverage large language models (LLMs) to interpret instructions, standard operating protocols, and other comments in natural language. Due to their specialized nature, each phase in the EDA workflow might require multiple such agents. Consequently, orchestration becomes key: a human engineer oversees this fleet of AI agents and must intervene frequently to guide these specialized agents.
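As a concrete illustration of the idea, here is a minimal sketch of a log-analyst agent of this kind; the log format, the prompt, and the `llm_complete` hook are hypothetical stand-ins, not any vendor's actual integration:

```python
# Hypothetical sketch of a Wave 1 task-specific agent: a log-file analyst
# that scans verification logs, extracts findings, and asks an LLM for a
# summary. The log format and llm_complete hook are illustrative stand-ins.
import re
from dataclasses import dataclass

@dataclass
class Finding:
    line_no: int
    severity: str
    message: str

def scan_log(log_text: str) -> list[Finding]:
    """Extract ERROR/WARNING lines from a verification log."""
    findings = []
    for i, line in enumerate(log_text.splitlines(), start=1):
        m = re.match(r"\s*(ERROR|WARNING)\b[:\s]*(.*)", line)
        if m:
            findings.append(Finding(i, m.group(1), m.group(2)))
    return findings

def summarize(findings: list[Finding], llm_complete) -> str:
    """Ask an LLM (any text-completion callable) to summarize and suggest fixes."""
    bullets = "\n".join(
        f"- line {f.line_no} [{f.severity}]: {f.message}" for f in findings)
    prompt = ("You are a verification log analyst. Summarize these findings "
              "and suggest corrective actions:\n" + bullets)
    return llm_complete(prompt)

# Usage with a stub in place of a real model call:
log = "INFO ok\nERROR: assertion failed at tb_top.sv:120\nWARNING x-prop on bus"
print(summarize(scan_log(log), llm_complete=lambda p: p[:60] + "..."))
```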

Wave 2: Increasingly autonomous agentic AI

Wave two introduces agentic AI: solutions that are no longer just reactive but proactive, capable of sophisticated reasoning, planning, and self-improvement. These differ from AI agents, which specialize in a single task; agentic AI solutions can handle entire EDA phases independently. For example, an agentic AI assigned to physical verification can conduct DRC and LVS checks, identify violations, and autonomously correct them by modifying the design layers, all with minimal human input. These agentic AI solutions also communicate with one another, passing along design changes, iterating in real time, and aligning downstream steps accordingly, which makes them even more powerful because they enable an iterative improvement cycle.

Furthermore, agentic AI solutions can deploy the multiple task-specific AI agents described in Wave 1, so fine-tuning is not required for every specific task, only for reasoning, planning, reflection, and orchestration capabilities. An EDA engineer or a purpose-built supervisory agentic AI typically oversees this orchestration, adapting the workflow, resolving conflicts, and optimizing outputs. Essentially, imagine a 24/7 design assistant that never sleeps, constantly refining designs for performance, power, and area. This is not a future vision; it is a near-term possibility.
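A minimal sketch of such a phase-owning loop, with placeholder check and fix functions standing in for real DRC/LVS tool integrations (an assumption-laden illustration, not any product's actual workflow):

```python
# Hypothetical sketch of a Wave 2 agentic loop that owns a physical
# verification phase end-to-end: run checks, fix violations, repeat.
# The check/fix functions are placeholders for real DRC/LVS integrations.

def run_checks(layout):
    """Placeholder for DRC + LVS runs; returns outstanding violations."""
    return list(layout["violations"])

def apply_fix(layout, violation):
    """Placeholder: a real agent would modify design layers here."""
    remaining = [v for v in layout["violations"] if v != violation]
    return {**layout, "violations": remaining}

def agentic_verification(layout, max_iters=10):
    """Iterate until the layout is clean or the iteration budget runs out."""
    for check_pass in range(max_iters):
        violations = run_checks(layout)
        if not violations:
            return layout, check_pass      # clean: hand off downstream
        for v in violations:               # an LLM-backed planner would
            layout = apply_fix(layout, v)  # choose each fix here
    raise RuntimeError("budget exhausted; escalate to a human engineer")

clean, passes = agentic_verification({"violations": ["M2 spacing", "via LVS"]})
print(f"layout clean after {passes + 1} check pass(es)")  # 2 passes
```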

Wave 3: Collective power of multiple AI agents 

The third wave isn't just about making individual AI agents or agentic AI solutions smarter; it's about scaling that intelligence. A powerful multiplying effect is unlocked when we move from a 1:1 relationship between engineer and AI solution to a 1:10 or even 1:100 ratio. Imagine that, instead of relying on a single instance of an AI agent or even an agentic AI to solve a problem, you could deploy dozens or hundreds of agents, each exploring different architectures, optimizations, or verification strategies in parallel. Each instance follows its own plan, guided by different trade-offs (e.g., power vs. area in PPA optimization). At this stage, the human role evolves from direct executor to strategic supervisor: reviewing outcomes, selecting the optimal solution, or suggesting fundamentally different design approaches. The result is exponential acceleration in innovation and design cycles across both new and existing products. Problems that took weeks to debug can now be explored from multiple angles in hours. Design ideas previously dismissed due to resource constraints can be pursued in parallel, uncovering groundbreaking opportunities that would otherwise have remained undiscovered.
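The fan-out pattern itself is simple to sketch. Below, Python's standard library runs many illustrative "agent instances" in parallel, each biased toward a different power/area trade-off; the scoring function is a toy stand-in for real design exploration:

```python
# Hypothetical sketch of Wave 3: fan out many agent instances in parallel,
# each following its own plan with a different trade-off, then pick the best.
from concurrent.futures import ProcessPoolExecutor
import random

def explore(trade_off: dict) -> dict:
    """Toy stand-in for one agent instance exploring the design space."""
    rng = random.Random(trade_off["seed"])
    power, area = rng.uniform(1, 10), rng.uniform(1, 10)  # pretend PPA results
    score = trade_off["w_power"] * power + trade_off["w_area"] * area
    return {"trade_off": trade_off, "score": score}

if __name__ == "__main__":
    # 100 instances, each biased differently between power and area.
    plans = [{"seed": i, "w_power": i / 99, "w_area": 1 - i / 99}
             for i in range(100)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(explore, plans))
    # The human acts as strategic supervisor: review and select the winner.
    best = min(results, key=lambda r: r["score"])
    print("best trade-off:", best["trade_off"], "score:", round(best["score"], 2))
```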

Both individuals and organizations will reap the transformative benefits of AI

As we enter this new EDA era, agentic solutions are going to shape how individuals and organizations work. At the individual level, chip designers gain productivity as AI takes over repetitive tasks, freeing them to focus on creative problem-solving. Micro-teams of AI agents will enable rapid exploration of multiple design scenarios, uncovering superior designs and speeding up tape-outs, thereby augmenting human expertise with faster AI-led execution. At an organizational level, AI-driven EDA solutions reduce time-to-market, accelerate innovation, and foster rapid development of cutting-edge products. More importantly, AI democratizes expertise across teams and the entire organization, ensuring consistent, high-quality designs regardless of individual experience, enhancing competitiveness. In conclusion, EDA AI will transform workflows in the next few years, greatly boosting productivity and innovation. Discover how Siemens EDA’s AI-powered portfolio can help you transform your workflow HERE.

Also Read:

Safeguard power domain compatibility by finding missing level shifters

Metal fill extraction: Breaking the speed-accuracy tradeoff

Siemens Describes its System-Level Prototyping and Planning Cockpit


Safeguard power domain compatibility by finding missing level shifters
by Admin on 05-14-2025 at 10:00 am

In the realm of mixed-signal design for integrated circuits (ICs), level shifters play a critical role in interfacing circuits that operate at different voltage levels. A level shifter converts a signal from one voltage level to another, ensuring compatibility between components. Figure 1 illustrates a missing level shifter between power domains.

Figure 1. A missing level shifter between two power domains is a common mistake encountered in analog-digital mixed-signal design.

It's not uncommon for a member of the design team to miss a level shifter while designing a large IC. Although a seemingly obvious mistake, a plethora of pragmatic factors can contribute to an error like this in a large, complex design: assumptions about voltage levels, lack of documented specifications, time constraints, design complexity, inexperienced designers, inadequate simulation and testing, and reuse of a previous design as reference.

Depending upon the design team and company size, and the time constraints and resources available for a particular project, one or more of these checkpoints may or may not be available. If absent, the likelihood of design mistakes to inadvertently slip through before fabrication significantly increases.

The difficulty of accurately identifying the need for a missing level shifter, despite all kinds of checkpoints, presents an opportunity for the electronic design automation (EDA) industry to provide a more robust solution that can avoid these all too common human mistakes.

Consequences of a missing level shifter

A missing level shifter in an IC design can have profound consequences that may completely compromise the integrity, performance, and even power consumption of the design. Some examples of the consequences that may occur due to a missing level shifter somewhere in the design are:

  • Signal integrity issues
  • Damage to devices
  • Increased power consumption/leakage
  • Reduced performance
  • Compatibility issues
  • Noise sensitivity

Figure 2 shows a diagram of an IP with two power domains. When both power domains are powered up and the voltage difference between them is less than a threshold value (which you can set in the simulator), there are no issues in the design. However, if the voltage difference is greater than the threshold value (say, Power domain 2 operates at a higher voltage level than Power domain 1), then a logic '1' driven from Power domain 1 can be read as a logic '0' in Power domain 2. This leads to incorrect data transmission.
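In code terms, the condition reduces to a simple supply-difference comparison. A minimal sketch follows; the 0.3 V threshold and the supply values are illustrative assumptions, not numbers from the article:

```python
# A minimal sketch of the condition described above: a driver in one
# power domain feeding a receiver in another. The threshold is a
# user-set parameter, like the simulator setting mentioned in the text.

def needs_level_shifter(vdd_driver: float, vdd_receiver: float,
                        threshold: float = 0.3) -> bool:
    """Flag a domain crossing whose supply difference exceeds the threshold."""
    return abs(vdd_driver - vdd_receiver) > threshold

# Power domain 1 at 1.2 V driving power domain 2 at 1.8 V: a logic '1'
# from domain 1 may be read as '0' by domain 2 without a level shifter.
print(needs_level_shifter(1.2, 1.8))   # True  -> insert a level shifter
print(needs_level_shifter(1.2, 1.25))  # False -> safe crossing
```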

Figure 2: A block-level diagram of an IP with multiple power domains.

An optimal solution for identifying inter-domain power leakage

The first step toward finding inter-domain power leakage is finding all DC paths between all power domains. This step lists every possible combination of DC paths that may exist between two or more power domains. Fortunately, this first step is relatively easy.
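A toy sketch of this first step, modeling the netlist as an adjacency map and collecting any path that crosses a domain boundary (a production tool, of course, operates on the full transistor-level netlist):

```python
# Illustrative sketch of the first step: enumerate DC paths between power
# domains by walking the netlist as a graph. The adjacency-map netlist and
# domain labels are toy assumptions, not a real netlist representation.
from collections import deque

def find_cross_domain_paths(adjacency, domain_of):
    """BFS from every net, recording paths that cross into another domain."""
    crossings = []
    for start in adjacency:
        queue = deque([(start, [start])])
        seen = {start}
        while queue:
            net, path = queue.popleft()
            for nxt in adjacency.get(net, ()):
                if nxt in seen:
                    continue
                seen.add(nxt)
                if domain_of[nxt] != domain_of[start]:
                    crossings.append(path + [nxt])   # domain boundary crossed
                else:
                    queue.append((nxt, path + [nxt]))
    return crossings

# Toy netlist: n1-n2 inside domain A, n2-n3 crossing into domain B.
adjacency = {"n1": ["n2"], "n2": ["n3"], "n3": []}
domain_of = {"n1": "A", "n2": "A", "n3": "B"}
print(find_cross_domain_paths(adjacency, domain_of))
# -> [['n1', 'n2', 'n3'], ['n2', 'n3']]
```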

After finding all DC paths and identifying crossings between different power domains, the tool performs a sophisticated evaluation to determine whether these crossings actually require level shifters. This complex analysis demands a comprehensive understanding of the circuit's architecture and power domain interactions.

The determination of missing level shifters requires detailed voltage analysis that can only be accomplished through advanced EDA tools. These tools examine critical voltage relationships, specifically the gate-to-source voltage in PMOS devices and gate-to-drain voltage in NMOS devices. The measured voltages are compared against expected values, which are derived from the respective power and ground rail specifications for each domain. When these voltages deviate from expected values, it signals a potential requirement for level shifting.
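A hedged sketch of that screening logic follows; the rail values, device records, and tolerance are illustrative assumptions, and the real analysis in a tool like Insight Analyzer is far more sophisticated:

```python
# Hedged sketch of the voltage screen described above: compare measured
# gate-to-source (PMOS) and gate-to-drain (NMOS) voltages against values
# expected from each domain's rails. All values here are illustrative.

def flag_level_shifter_candidates(devices, rails, tol=0.1):
    """Return devices whose key voltage deviates from the rail-derived value."""
    flagged = []
    for d in devices:
        expected = rails[d["domain"]]          # expected swing from rail spec
        measured = d["vgs"] if d["type"] == "PMOS" else d["vgd"]
        if abs(abs(measured) - expected) > tol:
            flagged.append(d["name"])          # potential missing shifter
    return flagged

rails = {"PD1": 1.2, "PD2": 1.8}               # per-domain supply levels
devices = [
    {"name": "MP1", "type": "PMOS", "domain": "PD2", "vgs": -1.2},  # driven
    {"name": "MN1", "type": "NMOS", "domain": "PD1", "vgd": 1.2},   # from PD1
]
print(flag_level_shifter_candidates(devices, rails))  # ['MP1']
```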

Siemens EDA offers an innovative tool called Insight Analyzer that can quickly and accurately identify the risk of missing level shifters between power domains, as well as many other circuit reliability issues that are not easily identified using simulation or traditional electrical rule checking tools. Insight Analyzer uses a form of state-based analysis at the pre-layout stage without the need for simulation, so designers can perform early design analysis during the development process. This early, shift-left, analysis makes the design process more efficient, saving time and money.

Conclusion

As semiconductor technology reaches new heights and chip designs become more complicated, with multiple power domains and multiple power states, the risk of a designer missing a level shifter becomes all too real. The risk of not catching these design elements until much later in the design phase, or even post-fabrication, is at an all-time high, as there is always a complicated set of instances and circumstances that may lead to a designer missing a level shifter in the early design phase. The Insight Analyzer tool provides circuit designers with a designer-driven reliability verification flow, capable of running full-chip, transistor-level circuit analysis. Using Insight Analyzer enables circuit designers to improve design reliability in today's fast-paced environment. You can learn more in my new technical paper Finding missing level shifters between power domains with Insight Analyzer.

About the author, Bhanu Pandey

Bhanu Pandey is a product engineer for Calibre Design Solutions at Siemens Digital Industries Software, with responsibility for the Insight Analyzer circuit reliability analysis tool. His focus and technical experience are in analog circuits. Prior to joining Siemens, Bhanu worked as an analog design engineer. He received his Master of Science degree in Electrical and Computer Engineering from the Georgia Institute of Technology.

Also Read:

Metal fill extraction: Breaking the speed-accuracy tradeoff

Siemens Describes its System-Level Prototyping and Planning Cockpit

Verifying Leakage Across Power Domains


A Timely Update on Secure-IC
by Bernard Murphy on 05-14-2025 at 6:00 am

I last wrote about Secure-IC, a provider of embedded security technologies and services, back in 2023. Cadence announced at the beginning of 2025 its intention to acquire the company, which warrants another check-in on what they have to offer. Secure-IC addresses multiple markets, from automotive through defense/space and more. The central value is security management all the way through a product lifecycle, from initial design through manufacturing, provisioning, mission mode, and ultimate decommissioning.

First, where are markets at on security?

We all understand that security is important, but just how important? According to Karthik Raj Shekar (FAE lead and Project Manager at Secure-IC), active engagement is accelerating. Where once security features were a checkmark, now they are must-haves in automotive, mobile, and server applications, defense (of course), smart cards, and payment apps.

That demand is evolving is not surprising. Take automotive, where new capabilities create new potential attack surfaces, through telematics or V2X for example. Even more critically, over-the-air updates with the ability to change core software demand high levels of protection.

What I found interesting is that pressure to get serious about security is being driven top-down. OEMs are being held to must-comply security standards, either regulatory or through guidelines/ad-hoc requirements. They push these down their supply chains not as checkmarks but as specific compliance expectations. Now Karthik sees chip makers becoming very proactive, anticipating where they will need to be versus coming security demands across a broad swath of markets. Which leaves me wondering which products can still ignore security. $60 porch-pirate cameras perhaps, but not security cameras providing guarantees. Cheap drones, but not high-end drones. Opportunities for non-compliant chips will still exist, but likely not in the big $$ markets.

Secure-IC solutions

The company provides a comprehensive palette of solutions, from embedded hardware and software which can be built into your chip design, to server solutions running in the cloud for fleet management, to security evaluation tools from design through post-silicon, to side-channel and fault injection vulnerabilities as a service provided by Secure-IC experts.

The design solution starts with a root of trust they call Securyzr, providing a very broad set of security services. These include of course attestation, key management, crypto options (including post-quantum options), secure connectivity, secure boot and trojan detection. Also some wrinkles I haven’t heard of elsewhere: ability to do post-silicon trimming for sensors (which can be controlled from the cloud-based server), and an AI agent embedded in the device software to reduce false alarms and ensure only important information is sent to the cloud server.

The cloud server is an integral part of the complete solution, allowing you to manage the security of a fleet of products. Here you can control provisioning (assigning keys to newly commissioned products), secure firmware updates over the air, and extensive monitoring options. As noted above, you can monitor and tune sensors, even turning malfunctioning sensors off to adapt to changing conditions in the field and among the fleet. Here integrated device security and cloud-based management makes sense; I'm not sure how a standalone cloud security platform could manage down to sensor-level tuning. Maybe at some point. One more important point: they also provide support for Product Security Incident Response Teams (PSIRT). Security is a dynamic domain, as we all see in regular OS and product update requests. PSIRT support helps OEMs stay on top of the latest zero-day and one-day threats for their own products. Ultimately, when you want to take a product out of service, the cloud service will support decommissioning, ensuring that expired credentials cannot be hijacked by a threat actor to pose as a legitimate member of the fleet.

If you are selling into security-sensitive fields, ultimately you will need to prove compliance through an authorized neutral lab. Getting ready for such testing can consume significant expert time and effort. Secure-IC tracks relevant standards very closely: PSA, Autosar MCAL, TPM 2.0, PKCS#11, Common Criteria, FIPS, etc., and can provide expert assistance and tools to do gap analysis on your design against the appropriate requirements for your target markets. They will also (optionally) help with side-channel and fault injection analysis, both areas demanding high expertise to track down weaknesses.

Altogether this looks like a very comprehensive suite of solutions. You can learn more about Secure-IC HERE.

Also Read:

Certification for Post-Quantum Cryptography gaining momentum

Facing challenges of implementing Post-Quantum Cryptography

Secure-IC Presents AI-Powered Cybersecurity


The Journey of Interface Protocols: The Evolution of Interface Protocols – Part 1 of 2
by Lauro Rizzatti on 05-13-2025 at 10:00 am

Prolog – Interface Protocols: Achilles' Heels in Today's State-of-the-Art SoCs

June 30 was only a week away when Varun had a sleepless night. The call from the datacenter manager the evening before alerted him to a potential problem with the training of a new generative AI model. Six months earlier, Varun's employer had installed the latest generation of a leading-edge ML accelerator that promised to cut the training time of the largest GenAI models by 50% via stellar processing bandwidth and reduced latency. Previously, training the largest LLMs, boasting over a trillion parameters, on the then state-of-the-art accelerators took approximately one year to achieve a satisfactory level of confidence. This process involved leveraging high-quality training data to minimize hallucinations. On paper all was perfect, if not for a little secret unknown to most: the loss of even a single data packet during training could necessitate retraining the entire AI model. The secret became a mantra in the GenAI community: "Start and Pray."

The grim scenario described above is just one of many with potentially catastrophic consequences unfolding across various cutting-edge industries today. From autonomous vehicles making split-second life-or-death decisions to AI-driven medical systems managing critical diagnoses, the stakes have never been higher. Yet, these industries share a disturbingly common vulnerability: malfunctions in interface protocols.

Part 1 of 2. The Evolution of Interface Protocols: From Supporting Components to Critical Elements in Modern SoC Designs

Part 1 presents a comprehensive overview of the evolution of Interface Protocols, tracing their journey from auxiliary support components to indispensable pillars in cutting-edge HPC SoC designs. These protocols now underpin not only everyday technologies but also mission-critical, complex AI applications. The section explores the key drivers behind the rapid advancement and frequent upgrades of existing protocols, as well as the innovation fueling the development of entirely new standards.

Brief Historical Perspective of Interface Protocols

Interface protocols have their origins in the early development of computer systems, dating back to the 1950s and 1960s, when computers transitioned from isolated, monolithic machines to interconnected systems. For most of the time since their inception, the design of Interface Protocols has been driven primarily by the need for connectivity between different components of a computer system or between different systems themselves. As the electronic industry expanded, Interface Protocols facilitated the interoperability of components sourced from different vendors.

Over the past decade, the landscape of Interface Protocols has undergone a profound transformation. Technological advancements have created new demands for higher performance, shorter latency, greater power efficiency, improved reliability, and enhanced security, all of which have driven significant changes in interface protocol design and development. These evolving requirements are now central to a wide range of applications, from consumer electronics to industrial systems, automotive, and more.

As a result, modern System-on-Chip (SoC) development priorities have shifted dramatically. Performance is no longer the sole focus; energy efficiency, data integrity, and robust security are equally critical.

See Sidebar: Seven Decades of Interface Protocols Development.

Key Drivers Shaping the Evolution of Modern Interface Protocols

The evolution of modern interface protocols is shaped by two major, yet opposing, industry trends. On one hand, the rapid growth of software development has shifted System-on-Chip (SoC) functionality from hardware-centric implementations to software-defined solutions. On the other hand, the meteoric rise of artificial intelligence (AI)—especially predictive and generative AI—has reintroduced heavy compute and data manipulation demands, moving the focus back to SoC hardware.

Software’s Dominance: Transforming SoC Development

In his influential 2011 article, “Why Software Is Eating the World,” Marc Andreessen predicted the transformative power of software across industries. Over the past decade, this vision has materialized, driving profound changes in SoC design.

The shift to software-defined solutions has revolutionized the development process by offering greater flexibility, faster time-to-market, and simplified post-release updates. Developers can now enhance and scale SoC functionality without requiring costly hardware redesigns. These advantages have significantly streamlined the SoC lifecycle, enabling rapid responses to changing market demands.

However, this transition has brought its own set of challenges:

  1. Data-Intensive Operations: Software’s growing reliance on large datasets demands substantial memory capacity.
  2. Energy Consumption: The continuous transfer of data between memory, processing elements, and interfaces consumes significant power.
AI’s Impact: Redefining SoC Hardware Requirements

The rise of AI has compounded these challenges while introducing a third. Predictive and generative AI applications require processing engines capable of handling massive data loads with minimal latency.

Traditional CPUs often fall short, as their architecture struggles with the bottlenecks of data movement between memory and compute units. To address these demands, the industry has embraced GPUs, FPGAs, and specialized AI accelerators, which excel at handling high-throughput workloads.

Yet even the most advanced processors face limitations if data delivery speeds cannot keep up. When memory and I/O protocols lag, high-performance processing units risk underutilization, idling while waiting for data. This highlights the critical importance of modernizing interface protocols to meet AI’s escalating data demands and fully leverage advanced SoC capabilities.

Implications on Interface Protocols by Key Industry Trends

As AI and software continue to drive innovation, balancing these opposing trends will require advances in memory and I/O protocols, leading to the rapid evolution of existing protocols and the emergence of new protocols.

Implication on Complexity

As modern SoC designs have grown in complexity, the embedded interface protocols that interconnect their components have also progressed at an extraordinary pace.

High-performance protocols, such as the latest iterations of PCIe—from Gen 4 to the cutting-edge Gen 7—have evolved into highly complex systems. The PCIe Gen 7 specifications alone now encompass over 2,000 pages, underscoring the complexity needed to enable advanced functionality. Furthermore, implementation complexity continues to escalate as data transmission speeds push the physical limits of what is achievable, challenging both design and manufacturing processes.

Implication on Performance

Cloud infrastructures, AI algorithms, and generative AI applications are fueling an unprecedented demand for data, both in volume and in processing power. This surge drives the need for massive memory capacities, higher communication bandwidth, lower latency, and significantly enhanced throughput.

Traditionally, achieving faster data transfer rates in new protocol generations was accomplished by physically positioning on-chip components closer together. However, modern protocols must now support connections over longer distances, where both bandwidth and latency become critical challenges.

Evolution of Existing Interface Protocols and Emergence of New Interface Protocols

In the fast-changing landscapes of AI, machine learning, and big data analytics, established protocols such as PCIe, Ethernet, and memory interfaces have undergone significant evolution to meet the growing demands for larger capacity and higher performance. As AI software workloads generate vast amounts of data, traditional data transfer mechanisms have faced challenges in keeping pace, resulting in inefficiencies affecting processing power, latencies and power consumption. Research highlights that moving data via DRAM consumes up to three orders of magnitude more energy than performing arithmetic operations on the data, making memory-related power consumption a critical bottleneck in high-performance computing environments.
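To make the magnitude concrete, here is an illustrative calculation; the per-operation energies are commonly cited ballpark figures assumed for this sketch, not numbers taken from the article:

```python
# Illustrative arithmetic for the "three orders of magnitude" claim above.
# The per-operation energies below are assumed ballpark figures for the
# sketch, not measurements from the article.

FLOP_ENERGY_PJ = 1.0          # ~1 pJ for an arithmetic op (assumed)
DRAM_ACCESS_ENERGY_PJ = 1000  # ~1 nJ to fetch an operand from DRAM (assumed)

ratio = DRAM_ACCESS_ENERGY_PJ / FLOP_ENERGY_PJ
print(f"DRAM access / compute energy: {ratio:.0f}x")  # ~1000x: 3 orders

# Consequence: even with only one DRAM access per 100 operations,
# memory traffic still dominates the energy budget.
ops_per_s = 1e12                                  # 1 Tops/s workload
mem_energy = ops_per_s / 100 * DRAM_ACCESS_ENERGY_PJ
compute_energy = ops_per_s * FLOP_ENERGY_PJ
mem_fraction = mem_energy / (mem_energy + compute_energy)
print(f"share of energy spent on memory: {mem_fraction:.0%}")  # ~91%
```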

The surge in demand for memory bandwidth and capacity has begun to exceed the capabilities of existing protocols. Consequently, well-established technologies like PCIe have had to evolve continuously, leading to innovations such as PCIe 7.0 and beyond. Meanwhile, new solutions like Compute Express Link (CXL) have emerged to address these limitations, offering greater flexibility in how memory and accelerators are connected. CXL enables cache coherency and shared memory resources across CPUs, GPUs, and other accelerators, enhancing efficiency and cost for workloads like AI inference and data analytics.

Simultaneously, multi-die architectures, which integrate multiple dies or chiplets within a single package, have introduced transformative improvements in data movement between processing units. By bringing these dies physically closer together, communication between them becomes faster and more power-efficient, significantly reducing latency.

Evolution of Existing Protocols

Existing protocols have been evolving at an increasingly rapid pace. One prominent example is PCIe, which has now advanced to PCIe Gen 7, set for release in 2025. Even as Gen 7 approaches, the PCI-SIG (Peripheral Component Interconnect Special Interest Group) is already discussing the specifications for PCIe Gen 8, highlighting the urgency of keeping up with growing performance needs. See Table I.

TABLE I: Bandwidth and latency specification of seven generations of PCIe. (*) PCIe 7.0 not released yet, specifications are estimated.

The evolution of Ethernet has been even more dramatic. Ethernet standards, particularly those supporting speeds up to 10 Gbps, have undergone more than a dozen amendments in the past five years alone, with the rate of updates accelerating. Ultra-Ethernet, currently under development by the Ultra Ethernet Consortium (UEC), which includes leading companies in AI and networking such as AMD, Intel, and HPE, is specified to support transfer speeds of up to 224 GB/sec. For context, this is nearly twice the speed anticipated from PCIe Gen 7 and positions Ultra-Ethernet as a direct competitor to Nvidia's NVLink[1], which offers a bandwidth of 480 GB/sec.

TABLE II: Bandwidth and latency specification of five generations of Ethernet. (**) Ethernet 800GbE not released yet, specifications are estimated

Memory protocols are advancing at a similarly rapid pace. DDR (Double-Data-Rate) memory has reached its sixth generation, while HBM (High-Bandwidth Memory) is now in its third generation, offering a bandwidth of up to 800 GB/sec. These developments in memory protocols are crucial for supporting the growing data needs of AI and high-performance computing (HPC) environments.

Emergence of New Protocols

In parallel with the evolution of existing protocols, entirely new protocols are being designed to address the unique demands of AI accelerator engines, where high bandwidth and low latency are critical. Some of the most groundbreaking are UCIe (Universal Chiplet Interconnect Express), UAL (Ultra Accelerator Link), and Ultra Ethernet from the UEC. These protocols are specifically engineered to ensure interoperability across diverse ecosystems while maximizing performance, increasing data bandwidth, and improving power efficiency. They are also designed with an emphasis on security, to ensure the reliability and integrity of the data transfers critical to AI and cloud-based systems.

In summary, the rapid evolution of existing protocols, coupled with the emergence of new ones, is building the technological infrastructure required to support the next generation of AI and data-intensive applications, such as autonomous vehicles.

Conclusions

The increasing complexity of SoC software, along with the rapid evolution of SoC hardware and rising performance demands, is pushing the design community to continuously innovate and extend the boundaries of interface protocols, all while ensuring efficiency and reliability. Modern interface protocols are designed with flexibility in mind, allowing them to adapt to diverse applications and workloads. This ongoing evolution fosters deeper integration between hardware and software, enabling SoC designs to deliver highly optimized solutions that balance performance, efficiency, and security.

Verifying the functionality and performance of these advanced protocols in sophisticated, software-driven systems demands a blend of high-performance hardware-assisted verification and protocol verification technologies built on proven protocol IP implementations. The rapid pace of protocol innovation necessitates aggressive roadmaps that match IP advancements with verification technologies, ensuring alignment with the tight time-to-market schedules critical for HPC market leaders.

As these protocols evolve, they will play a critical role in shaping the next generation of interconnected systems, expanding the possibilities in fields like artificial intelligence, autonomous systems, and cloud computing.

Sidebar – Seven Decades of Interface Protocols Evolution

The origins of Interface Protocols date back to the early days of computing in the 1950s and 1960s, when they were initially developed to enable communication between different components within a computer. These early protocols were often proprietary and hardware specific. However, over time, they evolved into standardized systems designed to facilitate seamless connectivity, communication, compatibility, interoperability, scalability, and security across devices and systems from multiple vendors.

As technology advanced, Interface Protocols became more sophisticated, secure, and universal, playing a crucial role in ensuring the smooth operation of increasingly complex computing environments. Over the course of seven decades, these protocols have been integral to the evolution of communication technologies.

1950s: Early Computer Systems and Proprietary Protocols
  • In the early days, computers like the UNIVAC and IBM 700 series used proprietary protocols to communicate with peripherals like punch card readers, printers, and tape drives.
1960s: Rise of Serial Communication and Early Networking
  • With the proliferation of peripherals and modems, the need for a protocol to transfer data over simple connections led to the development of the RS-232 standard, one of the most widely used serial communication protocols.
1970s: Networking and Early Standardization
  • In 1973, Xerox PARC developed the Ethernet protocol for local area networks (LANs) that quickly became the dominant standard for connecting computers within an area, enabling faster and more reliable communication.
  • Around the same time, ARPANET conceived the TCP/IP suite to provide a scalable protocol for interconnecting different networks. It set the stage for the Internet.
1980s: Expansion and Global Standardization
  • In the 1980s, the Small Computer System Interface (SCSI) standard was developed to connect peripherals such as hard drives, scanners, and others to host computers.
1990s: Internet and Computing Peripherals Protocols
  • With the rise of the World Wide Web, HTTP (Hypertext Transfer Protocol) and HTML (Hypertext Markup Language) became fundamental to web communication. HTTP facilitated the transfer of web pages between clients (browsers) and servers.
  • The Universal Serial Bus (USB) standard, introduced in 1996, supported data transfer rates of 1.5 Mbps (Low-Speed) and 12 Mbps (Full-Speed), a significant improvement over serial and parallel ports. It became a crucial protocol for connecting devices such as keyboards, mice, and storage drives to computers, offering plug-and-play functionality.
2000s: Wireless Communication and Computing Peripherals Protocols
  • Wi-Fi: Wireless communication protocols, particularly Wi-Fi (based on the IEEE 802.11 standard), became increasingly important in the 2000s as mobile computing and smartphones gained popularity.
  • Bluetooth: Bluetooth emerged as a short-range wireless protocol for connecting personal devices such as headphones, speakers, and wearables.
  • The USB standard has seen more than ten revisions since its inception. The latest, USB4 v2.0, released in 2022, supports a maximum bandwidth of 80 Gbps.
2010s-Present: High Performance and Secure Data Transfer
  • PCIe, Ethernet, and memory protocols underwent several upgrades in rapid succession and emerged as the de facto standards for AI and data centers.

[1] The NVLink Switch is the first rack-level switch chip capable of supporting up to 576 fully connected GPUs in a non-blocking compute fabric, interconnecting every GPU pair at 1,800 GB/s.

Also Read:

Metal fill extraction: Breaking the speed-accuracy tradeoff

How Arteris is Revolutionizing SoC Design with Smart NoC IP

CEO Interview with Ido Bukspan of Pliops


Leveraging Common Weakness Enumeration (CWEs) for Enhanced RISC-V CPU Security

Leveraging Common Weakness Enumeration (CWEs) for Enhanced RISC-V CPU Security
by Kalar Rajendiran on 05-13-2025 at 6:00 am

Information Flow Analysis: Cycuity's Unique Approach

As RISC-V adoption accelerates across the semiconductor industry, so do the concerns about hardware security vulnerabilities that arise from its open and highly customizable nature. From hardware to firmware and operating systems, every layer of a system-on-chip (SoC) design must be scrutinized for security risks. Unlike software, hardware is extremely difficult to patch after deployment—making early vulnerability detection critical. The rapidly growing number of hardware CVEs (Common Vulnerabilities and Exposures) reported by NIST underscores the seriousness and increasing sophistication of hardware-based threats.

At the core of these vulnerabilities are underlying weaknesses—the root causes that leave a system exposed. A weakness, as defined by MITRE, is a design flaw or condition that could potentially be exploited. These are cataloged in the Common Weakness Enumeration (CWE) database, while actual vulnerabilities (exploitable instances of those weaknesses) are tracked in the CVE database. The Spectre and Meltdown disclosures, for example, entered the record as CVEs; the underlying transient execution weaknesses they exploited are now cataloged as CWEs, as discussed below.

At Andes RISC-V CON last week, Will Cummings, senior security applications engineer from Cycuity, gave a talk on enhancing RISC-V CPU security.

MITRE CWE Framework for Hardware

MITRE’s CWE is a well-established, open framework in software and a growing presence in hardware. It now includes 108 hardware-specific CWEs across 13 categories, providing a structured and actionable way to identify, prevent, and verify fixes for known hardware design weaknesses. Categories include areas such as general logic design, memory/storage, cryptography, and transient execution, among others.

Why CWEs Matter for RISC-V

RISC-V and CWE share a foundational philosophy of security through openness. RISC-V, developed collaboratively as an open standard, aligns with Auguste Kerckhoffs’ principle: a system should remain secure even if everything about it is public. Similarly, CWE is an open, community-maintained repository that promotes transparency and standardization in security classification. This shared ethos makes CWE a natural fit for securing RISC-V designs.

Security analysis of typical RISC-V processor IPs shows that roughly 65% of all 108 hardware CWEs (about 70 weaknesses) are applicable. In some categories, such as core logic, memory, cryptography, and debug/test, over 70% of CWEs are relevant. This makes CWE a powerful tool for prioritizing and addressing security concerns in RISC-V development.

New Microarchitectural CWEs for Transient Execution

In early 2024, MITRE introduced new CWEs targeting microarchitectural weaknesses, developed with contributions from Arm, AMD, Cycuity, Intel, and Riscure. These CWEs address vulnerabilities associated with transient execution attacks, which have gained prominence because of exploits like Spectre and Meltdown:

CWE-1421: Shared Microarchitectural State — Core to most transient execution attacks

CWE-1422: Stale Data Forwarding — Covers stale or incorrect data forwarded to dependent operations during transient execution

CWE-1423: Integrity of Predictors — Focuses on corrupted branch predictors

These weaknesses fall under CWE-1420: Exposure of Sensitive Information during Transient Execution, which is itself part of the broader hardware design category under CWE-1194.
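
For readers who prefer to see this taxonomy as data, the minimal Python sketch below represents the parent-child structure just described. The CWE IDs and short names come from the article; the representation itself is illustrative, not a client for MITRE's actual CWE data feeds.

    # Minimal sketch of the CWE parent-child structure described above.
    # IDs and names are from the article; this is an illustration only.
    CWES = {
        "CWE-1194": {"name": "Hardware Design", "parent": None},
        "CWE-1420": {"name": "Exposure of Sensitive Information during "
                             "Transient Execution", "parent": "CWE-1194"},
        "CWE-1421": {"name": "Shared Microarchitectural State", "parent": "CWE-1420"},
        "CWE-1422": {"name": "Stale Data Forwarding", "parent": "CWE-1420"},
        "CWE-1423": {"name": "Integrity of Predictors", "parent": "CWE-1420"},
    }

    def ancestry(cwe_id):
        """Walk parent links from a CWE up to its root category."""
        chain = []
        while cwe_id is not None:
            chain.append(cwe_id)
            cwe_id = CWES[cwe_id]["parent"]
        return " -> ".join(chain)

    print(ancestry("CWE-1423"))  # CWE-1423 -> CWE-1420 -> CWE-1194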

A Structured Approach to Security: From Weakness to Verification

The proposed CWE-based framework maps weaknesses to specific security protection requirements, which are then refined into even more specific, checkable security properties. These properties in turn yield evidence from simulation, emulation, or formal methods. This structure helps ensure that every security requirement is grounded in a recognized weakness and backed by verifiable proof.
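
As a rough illustration of that traceability chain, here is a hypothetical Python sketch (the field names and example content are invented for this article, not Cycuity's actual schema): each weakness grounds one or more protection requirements, each requirement is refined into checkable properties, and each property accumulates verification evidence.

    # Hypothetical sketch of the weakness -> requirement -> property ->
    # evidence chain described above. Names and content are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class SecurityProperty:
        statement: str                                  # a specific, checkable property
        evidence: list = field(default_factory=list)    # simulation/emulation/formal results

    @dataclass
    class ProtectionRequirement:
        goal: str                                       # requirement derived from the weakness
        properties: list = field(default_factory=list)

    @dataclass
    class Weakness:
        cwe_id: str                                     # the recognized weakness grounding it all
        requirements: list = field(default_factory=list)

    def unverified(w):
        """List properties that still lack verification evidence."""
        return [p.statement for r in w.requirements
                for p in r.properties if not p.evidence]

    w = Weakness("CWE-1421", requirements=[
        ProtectionRequirement(
            goal="Secrets must not reach microarchitectural state shared across domains",
            properties=[SecurityProperty(
                statement="Key material never flows to a cache line readable by another domain")],
        ),
    ])
    print(unverified(w))  # the property above has no evidence yet

A structure like this makes gaps visible: any property with an empty evidence list is a security requirement that has not yet been demonstrated.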

Cycuity’s Role: Scalable, Architecture-Agnostic Security Verification

Cycuity plays a vital role in this process by offering early-stage security verification for hardware design. Its flagship product, Radix, uses information flow analysis to track how secure assets (e.g., encryption keys) move through hardware, firmware, and across boundaries between secure and non-secure domains. It simulates how attackers might exploit design flaws such as improper access control or leakage via shared microarchitectural resources, enabling early detection—before the chip is even built.
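
Information flow (taint) tracking is a well-established technique, and the toy Python sketch below conveys the core idea; it is an illustration of the principle, not Radix's implementation. A label is attached to a secret at its source, every value derived from it inherits the label, and any labeled value reaching an untrusted sink is flagged.

    # Toy information-flow (taint) tracker -- illustrative only.
    class Tainted:
        """A value carrying a security label that survives derivation."""
        def __init__(self, value, label):
            self.value, self.label = value, label

        def __xor__(self, other):
            # Any value derived from a labeled value stays labeled.
            v = other.value if isinstance(other, Tainted) else other
            return Tainted(self.value ^ v, self.label)

    def untrusted_sink(data):
        """Stand-in for any observable, non-secure destination."""
        if isinstance(data, Tainted):
            raise RuntimeError(f"flow violation: {data.label} reached an untrusted sink")
        print(data)

    key = Tainted(0x3C, "AES_KEY")   # label the asset at its source
    masked = key ^ 0xFF              # the derived value inherits the label
    untrusted_sink(masked)           # raises: AES_KEY reached an untrusted sink

Tracking flows at this level of abstraction is what allows leakage paths, including those through shared microarchitectural resources, to be caught in design and verification rather than in the field.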

While Radix is well-aligned with RISC-V’s modular architecture, it is architecture-agnostic and equally effective across other processor architectures and custom silicon. It integrates easily into standard SoC development workflows, including simulation, emulation, and formal verification environments. It also supports firmware-in-the-loop analysis and aligns with industry standards like CWE—ensuring security is both proactive and measurable.

Mutual Benefit: MITRE and RISC-V

MITRE and RISC-V International benefit from each other through their shared commitment to openness, transparency, and community collaboration. RISC-V offers a flexible, open platform where MITRE’s security frameworks like CWE can be directly applied and validated. In turn, MITRE enhances RISC-V security by enabling a systematic, standard-based approach to identifying and mitigating hardware design flaws.

Summary

The CWE framework provides a practical, structured methodology to enhance RISC-V security—starting from known weaknesses, mapping them to protection goals, and verifying that those goals are met. Combined with tools like Radix from Cycuity, which enable scalable, architecture-agnostic vulnerability detection, the industry now has the means to address hardware security earlier and more effectively.

Learn more at Cycuity.

Also Read:

CEO Interview: Dr. Andreas Kuehlmann of Cycuity

Cycuity at the 2024 Design Automation Conference

Hardware Security in Medical Devices has not been a Priority — But it Should Be