CEO Interview with Jon Kemp of Qnity
by Daniel Nenni on 07-25-2025 at 8:00 am

Jon Kemp Bio

Jon Kemp is President of the Electronics division for DuPont and Chief Executive Officer-Elect for Qnity, the planned independent Electronics Company, which will be an independent, publicly traded company spun off from DuPont’s (NYSE: DD) Electronics business upon completion of the intended separation on November 1, 2025.

With more than a decade of leadership experience in electronics at DuPont, Jon's strategic vision set a pathway for significant portfolio growth. As president of the Electronics & Industrial business, Jon shaped business strategy and operations to capitalize on market trends, leading a major portfolio transformation and growing the business to nearly $6B in net sales in 2024.

Jon's career at DuPont began in 2005, and he has held several key roles, including President of Electronics and Communications and Global Business Director for Circuit & Packaging Materials. After the merger of DuPont and Dow in 2017, he led strategy and M&A for the newly formed Specialty Products Division.

Jon serves on the International Board of Directors for SEMI, where he chairs the Board of Industry Leaders.

Tell us about your company?

DuPont is targeting November 1, 2025, for the completion of its spin-off of the Electronics business, to be called Qnity. While the name is new, our legacy is longstanding. Driven by a purpose to make tomorrow’s technologies possible, Qnity will be one of the largest global leaders in electronic materials and solutions for the semiconductor and advanced electronics industries. As the partner of choice for our customers today, we have a seat at the design table working to advance their technology roadmaps through materials science and engineering solutions that the next generation of advanced computing and connectivity applications require. Qnity will have about 10,000 employees, 40 manufacturing sites, and nearly 20 R&D facilities strategically located near our customers to enable the speed of innovation.

Speaking of the name (pronounced cue-ni-tee), it’s inspired by the symbol for electrical charge, “Q,” and “unity” — a nod to our history and collaboration model with customers. Our electronics portfolio dates back more than 50 years, with a reputation built on speed, quality, and reliability. Looking forward, Qnity will be a global leader in differentiated electronic materials — supplying key consumables used in semiconductor chip manufacturing, advanced electronic materials for packaging and interconnects, thermal management, and innovative assembly and display technologies. We bring a unique, end-to-end perspective on the electronics value chain, and we’re excited about what’s next.

What problems are you solving?

The convergence of industry mega trends like advanced computing and advanced connectivity is accelerating the pace of innovation. AI and related investments are driving an acceleration of digitization and electrification across different industries, including data centers, electric and autonomous vehicles, smart consumer electronics, aerospace, manufacturing, and beyond. These cutting-edge technologies bring new, complex challenges that require a mix of materials science and engineering expertise. At Qnity, we’re committed to a mindset of continuous improvement to address key issues in high-performance computing, thermal management, signal integrity, energy efficiency, miniaturization, and more.

What application areas are your strongest?

We play a critical role partnering with the world’s semiconductor and advanced device manufacturing leaders for semiconductor chip manufacturing, advanced packaging and interconnects, displays, and more. I’ll break that down into a couple of key areas.

For semiconductor technologies, we offer products and expertise that improve chip performance, enhance manufacturing yields, and enable leading-edge node technology for multiple stages of the semiconductor manufacturing process, especially in chemical mechanical planarization (CMP) and lithography.

For interconnects, we provide material solutions that enable the seamless connection of various electronic components to address signal integrity, thermal and power management, advanced packaging, and circuitry technologies.

What keeps your customers up at night?

Supply chain reliability and resiliency is a big concern for customers. In recent years, the COVID-19 pandemic tested the capabilities of supply chains worldwide. Based on what we learned, we adjusted our strategies to emphasize a continuous improvement mindset and stronger local network across our entire business. As a result, we’ve strengthened our entire network — working closely with suppliers and customers alike to boost speed, agility, and reliability. We are committed to supporting our customers with continuous investments in end-to-end supply chain and manufacturing processes, including quality measurement, automation, rapid design and prototyping, and supply chain management.

Sustainability also continues to be top of mind. Customers are increasingly focused on meeting their goals around material usage, waste, and energy efficiency, and they rely on us to provide innovative materials and solutions that help them get there. We’re committed to embedding sustainability throughout our innovation pipeline to support those efforts.

What does the competitive landscape look like and how do you differentiate?

Our competitors range from large multinational corporations to smaller, more specialized regional players. Our global scale, strategic operational footprint, world-class process technologies, and robust portfolio that enables us to work both up and down the electronics value chain make us a compelling partner for our customers.

Simply put, we’re designed to facilitate optimal customer collaboration with strong product performance and consistency, high quality, and reliable supply. We’re focused on providing the materials that enable their technology roadmaps and ultimately power the next generation of semiconductor and other advanced electronics applications.

What new features/technology are you working on?

Partnering from the start of the design process through delivery of high-volume manufacturing, we have a seat at the design table working to advance customers’ technology roadmaps. We support the most advanced designs in both logic and memory (including N3, N2, advanced DRAM, and HBM), advanced packaging (including 2.5D, 3D, and heterogeneous integration), and thermal management while helping customers achieve improved performance, efficiency, and sustainability.

We’ve recently announced new product launches in chemical mechanical planarization (CMP) pads, post-CMP cleans, high-selectivity etchants, photoresists, and extreme ultraviolet lithography (EUV) underlayers for semiconductor chip manufacturing, as well as thermal solutions to achieve superior thermal performance and long-term stability for next-gen server and data center applications.

How do customers normally engage with your company?

Our disciplined and experienced team is laser focused on delivering value for our customers. We have a strong track record of collaborating to empower advanced technology roadmaps, working side by side at the design table with customers’ technology and engineering teams. This partnership extends from the laboratory to the manufacturing line, where we work to optimize and customize our solutions to maximize yields and performance in our customers’ manufacturing processes. We’re excited to continue building on this approach, looking to bring an even greater speed of innovation, higher quality, and more reliable supply to our customers.

If you have a technology challenge or want to learn more about what we’re working on next, get in touch with our team at qnityelectronics.com.

Also Read:

Executive Interview with Matthew Addley

CEO Interview with Jonathan Reeves of CSignum

CEO Interview with Shelly Henry of MooresLabAI


Griffin Securities’ Jay Vleeschhouwer on EDA Acquisitions and Startups
by Bob Smith on 07-25-2025 at 6:00 am


Jay Vleeschhouwer, Managing Director of Software Research at Griffin Securities, is a noted financial analyst who does a yearly presentation on the State of EDA during the Design Automation Conference (DAC). This year was no exception. He and I spent a memorable afternoon discussing the Synopsys-Ansys merger and startups. A condensed version of our talk follows.

Big EDA includes Cadence, Siemens EDA and Synopsys, Ansys and Keysight. Now that the Synopsys-Ansys merger closed, do you see changes in the landscape?

Since this combination occurred, it is the largest Engineering Software company on the planet by revenue and by backlog.

In terms of financial profile, this is by far the largest company in all of Engineering Software, a well over $30-billion industry, including all the parts of that market—AEC, EDA and Technical Software. Synopsys plus Ansys is the largest part of that. The pro-forma backlog is the largest in the industry.

From an operational or strategy point of view, Ansys EDA is roughly a fifth of Ansys’ business, making it the fourth largest EDA company. Combining the largest EDA company with the fourth largest, Synopsys will pick up another roughly three and a half points of market share from Ansys’ EDA.

More broadly, the question is how Synopsys will integrate, employ and leverage the four fifths of Ansys that is not strictly EDA. That portion is not Ansoft and Apache, the two entities that mostly comprise Ansys' EDA, and it ties into the convergence theme.

We pointed out that this is a more than $30-billion manifestation of convergence. Keysight, on a smaller scale, is pursuing its strategy for convergence through acquisitions, including last year's acquisition of ESI, a small French simulation software company. Two pending Keysight acquisitions will be made in the photonics and optical area. Both are conditional upon the close of the acquisition of Ansys, something that the regulators required as a condition of approval.

This will certainly build out the Engineering Software portfolio for Keysight and is complementary to what they're doing. They, too, speak about multi-physics in the same way that Ansys has been speaking about multi-physics as part of its strategy for years. Of course, Synopsys acquired the largest multi-physics software company on the planet with Ansys. In terms of changing the landscape, that is a function of the four fifths of Ansys that is not EDA, because the one fifth has already been working closely with Synopsys since their 2017 technical integration agreement. That will help smooth the integration, at least on the product side.

Beyond that, there’s always the question in any acquisition as to the balance between leaving the operations and the portfolios as they were. In other words, let them continue doing what they were doing if they were doing it well and/or absorbing, integrating and leveraging those portfolios into the buyer’s portfolio.

That roadmap is something that we would be interested in hearing more about in terms of its purely EDA aspects and then the convergence aspects. It will also be interesting to see if they update the initial revenue and cost synergies from SNUG in March 2024. They gave some specific revenue and cost synergies that they foresaw in terms of combined products and cost savings, which would seem to be, on the cost side, achievable.

Will this make it harder for startups?

It's always hard for startups in this industry because they have so much to prove when Synopsys, Cadence and Siemens are investing the substantial amounts necessary in their portfolios. Opportunities for startups will always be a function of provable, superior performance in critical areas.

One of the interesting things for investors is how foreseeable the technology requirements are. The semiconductor roadmap is so well laid out and defined in terms of nodes that you can work backwards to what critical tools will be needed to solve new or upcoming problems. Opportunities are there for startups to correctly identify what those needs are and then be able to prove sufficiently superior performance in benchmarks for engineering groups to adopt a proof-of-concept tool or complementary tool.

The fact that the two largest companies have accumulated so much more share, organically and inorganically, and cover so much more of the EDA portfolio landscape makes it harder for startups. I wouldn't want to say it isn't possible or it's infeasible, but we know what the benchmarks or conditions for success have to be.

About Jay Vleeschhouwer

Jay Vleeschhouwer, Managing Director of Software Research at Griffin Securities, has more than 40 years of research analyst experience in the technology sector, including software, semiconductors and computer hardware. Vleeschhouwer does a yearly presentation on the State of EDA during the Design Automation Conference (DAC). The slides can be found at: DAC presentation (June 2025) 2.pdf

Note: The ESD Alliance will host a three-hour design track “The Convergence of Semiconductor Manufacturing and Design” Tuesday, October 7, from 1 p.m. until 4 p.m. during SEMICON West in Phoenix, Ariz.

Also Read:

Scaling 3D IC Technologies – Siemens Hosts a Meeting of the Minds at DAC

Analysis and Exploration of Parasitic Effects

Siemens Proposes Unified Static and Formal Verification with AI


Security Coverage: Assuring Comprehensive Security in Hardware Design
by Daniel Nenni on 07-24-2025 at 10:00 am


As hardware systems become increasingly complex and security threats grow more sophisticated, ensuring robust hardware security during the pre-silicon phase of development is more critical than ever. Cycuity’s white paper outlines how its Radix platform enables engineers to verify, visualize, and measure the effectiveness of hardware security throughout the design lifecycle, ultimately ensuring compliance, minimizing vulnerabilities, and building trust with customers, auditors, and regulators.

Radix provides security coverage through a data-driven approach that quantifies how thoroughly security verification has been applied to a hardware design. Like functional coverage in traditional verification, security coverage ensures that protection mechanisms and policies are not only in place but are rigorously exercised and validated during simulation or emulation. This enables design teams to identify and address vulnerabilities early, avoiding expensive post-silicon fixes and reducing overall risk.

The verification of hardware security features is split into two main activities: functional security verification and security protection verification. Functional security verification ensures that security components behave as expected. For example, a test might check whether a cryptographic key reaches the AES encryption block within a specified time frame when requested. This aspect of verification is often addressed using traditional techniques like formal verification, assertions, and directed tests.

In contrast, security protection verification addresses broader questions, such as whether sensitive data might inadvertently escape a chip’s boundary. This approach verifies that protections are in place to prevent unintended or unauthorized data flows, and it enables a more system-level perspective. While functional verification focuses on specific, localized behaviors, protection verification considers the full design over extended time periods and wider spatial contexts.

Cycuity’s Radix technology supports both types of verification and introduces security coverage metrics to evaluate the thoroughness of these efforts. These metrics show how well security requirements—like ensuring a key never exits a secure module—have been tested under various conditions. The platform allows security teams to define assets, specify security rules, and track whether these rules are upheld in practice. When rules are violated or not sufficiently exercised, Radix offers powerful debug tools including waveform, RTL, and schematic views that pinpoint information flow issues.

The concept of a protection boundary is central to Radix’s methodology. This refers to circuit logic that confines secure data within specific areas, preventing leakage or misuse. For instance, a control signal might be required to gate the release of encrypted data, thereby establishing a hardware-based boundary. Security coverage tracks whether this boundary has been properly implemented and whether all relevant paths leading to and from it have been adequately tested.

To calculate security coverage, Radix monitors information flow between a source (like a secure key) and its destination (such as a system output). Toggle coverage—a standard verification metric used to track how often signals change—is collected across test runs and merged into a comprehensive database. Radix then analyzes this database to produce a security coverage metric, which is visualized through its user interface. This GUI highlights problem areas and enables cross-probing into schematics and RTL code for further analysis.
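
As a rough illustration of the bookkeeping behind such a metric, the sketch below merges hypothetical per-test toggle counts for signals on a tracked key-to-output path and reports the fraction that were exercised. The signal names and flat data layout are invented for illustration; Radix's actual database, information-flow tracking, and coverage computation are far more sophisticated.

```python
from collections import defaultdict

# Minimal illustration only: merge per-test toggle counts for signals that lie
# on a tracked information-flow path (e.g., a secure key -> system output),
# then report the fraction of those signals that were ever exercised.
# The signal names and flat data layout are hypothetical.

path_signals = ["key_reg", "aes_in", "aes_out", "bus_data", "chip_out"]

# Toggle counts observed in each test run (signal -> number of transitions).
run1 = {"key_reg": 12, "aes_in": 8, "aes_out": 0, "bus_data": 5}
run2 = {"key_reg": 3, "aes_in": 2, "aes_out": 4, "bus_data": 0}

def merge_toggles(*runs):
    """Merge toggle counts from multiple test runs into one database."""
    merged = defaultdict(int)
    for run in runs:
        for sig, count in run.items():
            merged[sig] += count
    return merged

merged = merge_toggles(run1, run2)
exercised = [s for s in path_signals if merged.get(s, 0) > 0]
coverage = len(exercised) / len(path_signals)

print(f"Security coverage on key -> output path: {coverage:.0%}")
print("Unexercised signals:", [s for s in path_signals if s not in exercised])
```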

Low security coverage may result from several factors, including misconfigured protection boundaries, insufficient test coverage, or flawed RTL implementations. Radix helps identify the root cause and allows teams to adjust designs or add targeted tests. This iterative process is akin to achieving functional coverage and is essential for preparing a design for final security signoff.

The value of security coverage extends beyond internal development. The reports generated by Radix offer credible, visual, and actionable evidence of compliance with standards like ISO 21434 and the NIST cybersecurity framework. These reports are useful for customers, regulators, and auditors seeking transparency and assurance.

In conclusion, Cycuity’s Radix platform brings much-needed rigor and visibility to pre-silicon hardware security. By defining, measuring, and analyzing security coverage, Radix empowers engineering teams to deliver secure silicon with confidence. It bridges the gap between design intent and implementation reality, helping organizations not only meet compliance requirements but also enhance trust, accountability, and resilience in their hardware products.

You can download this whitepaper here.

Also Read:

Podcast EP287: Advancing Hardware Security Verification and Assurance with Andreas Kuehlmann

Leveraging Common Weakness Enumeration (CWEs) for Enhanced RISC-V CPU Security

Cycuity at the 2024 Design Automation Conference


Scaling 3D IC Technologies – Siemens Hosts a Meeting of the Minds at DAC
by Mike Gianfagna on 07-24-2025 at 6:00 am


3D IC was a very popular topic at DAC. The era of heterogeneous, multi-chip design is here.  There were a lot of research results and practical examples presented. What stood out for me was a panel at the end of day two of DAC that was hosted by Siemens. This panel brought together an impressive group of experts to weigh in on what was really happening and what the future looked like. I thoroughly enjoyed the honest and direct comments from this group. In case you missed it, there is a link coming so you can experience the whole event. To set the stage, here is a short summary of the excellent discussions about scaling 3D IC technologies as Siemens hosts a meeting of the minds at DAC.

The Distinguished Panel

This is not hyperbole. Here’s a quick summary of who was on the stage. The lineup is also shown in the graphic above.

Ed Sperling, Editor in Chief, Semiconductor Engineering. Ed has either written for or been Editor in Chief for most of the mainstream technology publications for over 35 years. He was also a contributing editor for Forbes. Ed knows the hard questions and knows how to pose them.

Dr. Jeff Cain, VP of Engineering, Chipletz. Jeff represented the startup on the panel. He has been developing semiconductor solutions for 30 years, including half that time at Cisco. Chipletz is developing Smart Substrate™ products, an advanced packaging technology for chiplet-based design.

Letizia Giuliano, VP product marketing & management, Alphawave Semi. Letizia has been involved in marketing and engineering for advanced semiconductors for over 20 years at companies such as ST Microelectronics and Intel. She has been with Alphawave Semi for over five years.

Lalitha Immaneni, VP architecture, advanced design & customer enabling, technology development, Intel. Lalitha is an engineering leader with 30 years of experience in cross functional global engineering management, strategic planning/execution, and operations management.

Subi Kengeri, Corporate VP & GM, Systems to Materials, Applied Materials. Subi has over 30 years of experience in the design and manufacture of advanced semiconductor products. His career began at TI, and he has worked at companies such as Analog Devices, Virage Logic, TSMC, and GLOBALFOUNDRIES. He has been at Applied Materials for over five years.

Dr. Deepak Kulkarni, Senior Fellow, AMD. Deepak has worked in artificial intelligence, communication, and advanced packaging for 20 years. Dr. Kulkarni spent 15 years at Intel before joining AMD.

Tony Mastroianni, senior director, 3D IC Solutions Engineering, Siemens EDA.  Tony has been developing advanced chip designs and advanced EDA tools for over four decades. I’ve known him for a long time, from the early days at RCA to later at eSilicon, where Tony was part of a pioneering team in multi-die design. He’s also worked at Silicon Design Labs, Silicon Compiler Systems, Mentor Graphics, and ASIC Alliance before joining Siemens EDA about five years ago.

Some Snippets of the Conversation

Here are some questions posed by Ed Sperling and summarized responses from the panelists.

From your broad vantage point, where do you see 3D IC trends taking off?

Tony Mastroianni: Hyperscalers have been doing this for some time. Automotive is starting to adopt the technology. Mil aero and defense are joining in as they really need this technology to advance the class of designs before them.

The current driver for 3D IC is AI data centers. A lot of those designs are driven by the hyperscalers and will never be sold commercially. Is this the start of a real market, or just a few customers who will drive massive demand?

Deepak Kulkarni: AI data center applications are driving early progress, but these programs are validating technologies that will see broad use in many designs if they are proven to be efficient and cost-effective. AMD 3D V-Cache is an example of this.

Lalitha Immaneni: We would like to see the ideal world of re-usable chiplets across various applications. This utopian concept of a chiplet marketplace is desirable, but there is still work to do in the ecosystem. Standards will be needed, things like performance specs with clear technology and product targets for example.

Many have said 3D IC is all about the interconnects, moving data from one area to another. Is this a growing market or is it still just a bunch of experiments?

Letizia Giuliano: To move away from a monolithic design style will require a lot of up-front planning on how interfaces will be done. Each interface requirement will need a dedicated solution. One configuration for all won’t work. Each channel interconnect will have unique requirements. Flexibility will help, but there must be dedicated, proven configurations. One must carefully manage power, area, and latency budgets. These efforts are underway and will be broadly valuable.

A lot of the new devices are looking at completely new materials. There are structural and thermal issues, with the possibility for things to go wrong. What are you hearing from your customers?

Subi Kengeri: To step back a bit, chiplet evolution has been driven by three key factors: defect density drove the need for smaller dies, ease of design re-use in a proven technology was another driver, and the ability to integrate heterogeneous functions in the best-fit technology was a third. We now need high levels of signal, power and thermal performance for this to all work. Applied Materials spends about $3.5 billion each year to address these challenges by validating new materials. We must stay ahead of the rest of the semiconductor industry.
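
The defect-density point is the classic yield argument for splitting a large die into chiplets. A minimal back-of-envelope sketch, using a simple Poisson yield model with made-up defect density and die areas (not figures from the panel), shows how quickly yield falls as die area grows:

```python
import math

# Back-of-envelope Poisson yield model: Y = exp(-D0 * A).
# D0 (defects per cm^2) and the die areas below are illustrative placeholders;
# the point is how quickly yield falls with die area.

D0 = 0.1  # defects per cm^2 (hypothetical)

def poisson_yield(area_cm2, defect_density=D0):
    """Probability that a die of the given area has zero defects."""
    return math.exp(-defect_density * area_cm2)

monolithic_area = 8.0                  # cm^2, one large reticle-limited die
chiplet_areas = [2.0, 2.0, 2.0, 2.0]   # the same silicon split into four chiplets

print(f"Monolithic die yield: {poisson_yield(monolithic_area):.1%}")
print(f"Per-chiplet yield:    {poisson_yield(chiplet_areas[0]):.1%}")
# With known-good-die testing before assembly, a defect costs one small die
# rather than the whole large die.
```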

Jeff Cain: What we do at Chipletz is work on next generation substrate technology. We want to go back to the OSAT model. That is, source multiple components and reliably package them for final delivery. We believe the technology to enable this will be ready in about a year. Our vision is to remove dependence on one supplier and make the process more predictable.

Tony Mastroianni: There is promising work on very large substrates. This could be a few years out, but it will enable massively large systems. Many technologies must come together to enable these designs. AI will play a key role. It’s not just a packaging challenge; it’s a system design challenge. The tools involved must work well together and this is an area Siemens EDA is focusing on. Managing the data and meta data is another focus area for us. This will be needed to enable AI solutions. And we’re working with the ecosystem to develop vendor neutral standards that will be needed as well.

In Summary

There were many more useful conversations during this one-hour event. I hope the snippets above give you a feeling for where the conversation went. I left the event with an upbeat outlook for the future of 3D IC design. Looking back, the companies on the panel are competitive with each other. The competition is still there, but the need to collaborate to pave the way to next generation technology was quite strong. This is new.

That attitude of “we all win” was refreshing and created an optimistic view of the future for me. All the companies on this panel are doing important and high-impact work to advance the cause. You can watch the entire event here. And you can learn more about what Siemens EDA is doing to advance 3D IC technologies here. And that’s how Siemens hosts a meeting of the minds at DAC.


Analysis and Exploration of Parasitic Effects
by Daniel Payne on 07-23-2025 at 10:00 am


With advanced semiconductor processes continuing to shrink, the number and complexity of parasitic elements in designs grow exponentially, contributing to one of the most significant bottlenecks in the design flow. Undetected parasitic-induced issues can be extremely costly, often resulting in tape-out delays.

Silvaco addresses this challenge with its EDA tool, Viso, which enables intelligent exploration of parasitic effects. Viso helps designers identify and resolve the root causes of parasitic-induced issues early in the design cycle, significantly improving design reliability and reducing time-to-market.

Carlos Augusto Berlitz, PhD, Corporate AE at Silvaco, presented a webinar on the Viso tool, so here's what I learned about its features for parasitic exploration, analysis and debug:

  • RC Delays and Resistances: global node-to-node and detailed calculation
  • Net-to-Net Coupling: sensitive nets verification, coupling map
  • Net Comparison: resistances, RC delays, coupling capacitances
  • Sanity Checks: DC path, instances, dangling nodes, etc.

Users can find detailed information about their parasitics, like layer contributions, a heatmap for visualization, tables and charts.  You don’t have to resort to time-consuming manual analysis approaches and excessive SPICE simulations.

Carlos demonstrated Viso with an example high-speed circuit for LVDS operating at 1.2V where a differential signal is used to reduce electromagnetic noise.

Comparing the pre-layout netlist simulation with the post-extraction simulation shows that the duty-cycle was degraded, so the challenge was to find out how to make the duty-cycle more balanced. Shown in blue is the plot of the schematic netlist with a 53% duty-cycle, and the orange plot is from the post-extracted netlist with a 37% duty-cycle.
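
For intuition only, here is a minimal sketch of how a duty-cycle figure like those above can be computed from sampled waveform data: count the fraction of samples above a logic threshold. The samples and the 0.6 V threshold (half of the 1.2 V supply) are invented, not data from the webinar.

```python
# Minimal sketch of measuring duty cycle from a sampled waveform: the fraction
# of samples above a logic threshold. Sample data and threshold are invented.

def duty_cycle(samples, threshold=0.6):
    """Return the fraction of samples at or above the logic threshold."""
    high = sum(1 for v in samples if v >= threshold)
    return high / len(samples)

# Fake single-period waveforms with uniform sampling.
pre_layout = [1.2] * 53 + [0.0] * 47    # ~53% duty cycle (schematic netlist)
post_layout = [1.2] * 37 + [0.0] * 63   # ~37% duty cycle (post-extraction)

print(f"Pre-layout duty cycle:  {duty_cycle(pre_layout):.0%}")
print(f"Post-layout duty cycle: {duty_cycle(post_layout):.0%}")
```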

Viso was used to look at different nets in the path of the input signal to the output to determine the biggest net contributors to RC delays. Here’s the cumulative RC delay on four nets in the path, where blue is the shortest delay and red is the longest delay.

Once a net has been identified, the tool further shows the highest contributors to the path delay, speeding up the analysis process, all without having to run a SPICE circuit simulator. Tool users can visualize the main layer contributors in tables and charts and even see how sensitive the results are to changes in each layer.

Diving deeper, the tool shows parasitic resistance by each layout segment, so designers and layout engineers know exactly what in the layout is contributing most to RC delays. Knowing where in the layout the largest RC contributors are located allows them to make corrections with precision and fewer iterations. The what-if feature allows you to recalculate the analysis quickly to explore the impact of fixes all without changing the layout. Based on the RC analysis so far, a layout change was made and then the parasitics extracted, yielding an acceptable improvement to a 40% duty-cycle.
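
The sketch below illustrates, in a very simplified way, the kind of bookkeeping behind ranking layout segments by RC contribution and running a what-if on a single layer. It uses a first-order per-segment R·C estimate with hypothetical segment values and layer names; Viso's extraction and delay models are of course far more detailed.

```python
# Hypothetical per-segment parasitics for one net: (layer, R in ohms, local C in fF).
segments = [
    ("M1",   120.0, 3.0),
    ("M2",    40.0, 8.0),
    ("VIA1",  15.0, 1.0),
    ("C2",   300.0, 6.0),   # suspect high-resistance layer
]

def segment_rc_ps(r_ohms, c_ff):
    """First-order per-segment R*C estimate in picoseconds."""
    return r_ohms * (c_ff * 1e-15) * 1e12

# Rank segments by contribution so the biggest offenders show up first.
for layer, r, c in sorted(segments, key=lambda s: segment_rc_ps(s[1], s[2]), reverse=True):
    print(f"{layer:5s} contributes ~{segment_rc_ps(r, c):.2f} ps")

# What-if: halve the C2 resistance (e.g., a wider trace) without touching the rest.
total_before = sum(segment_rc_ps(r, c) for _, r, c in segments)
total_after = sum(segment_rc_ps(r / 2 if layer == "C2" else r, c)
                  for layer, r, c in segments)
print(f"Estimated delay: ~{total_before:.2f} ps -> ~{total_after:.2f} ps after the C2 fix")
```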

Another useful feature in Viso is comparing two nets to verify that they are matched by parasitic resistance, RC delay or capacitive coupling. Differential pairs with imbalances can be quickly spotted. From the example LVDS design it was discovered that the RC delay and parasitic capacitive coupling were balanced on nets rxin1 and rxin2, but the balance of parasitic resistances between both paths could be improved, as shown in the resistance path comparison plot with red points.

The resistance contributions of layer C2 between nets rxin1 and rxin2 were not balanced, as identified by Viso in the bar chart comparison.

Capacitive coupling between nets can also be compared to see how it impacts RC delays. When you need to balance capacitive coupling, this is a quick method to use. Another useful feature is analyzing capacitive coupling comparisons by aggressor nets, something not easy to do with traditional parasitic analysis methods.

Summary

With the newer approach using Viso, your team can perform analysis of parasitic effects more thoroughly, in less time, with fewer resources and fewer iterations. Using Viso allows engineers to debug and fix parasitic issues early, thus reducing lengthy design cycles. Finding key layout segments where parasitics limit performance is now possible, and even tasks like parasitics balancing and matching of differential signals and differential pairs become feasible. IC design teams will benefit by adding Viso to their tool flow.

Watch the archived webinar online.

Related Blogs


Siemens Proposes Unified Static and Formal Verification with AI
by Bernard Murphy on 07-23-2025 at 6:00 am


Given my SpyGlass background I always keep an eye out for new ideas that might be emerging in static and formal verification. Whatever can be covered through stimulus-free analysis reduces the time that must be spent in dynamic analysis, while adding certainty to coverage across that range. Still, advances don't come easily. Static analyses are infamously noisy, and formal methods equally infamously demand significant verifier expertise. Apps have made formal proofs more accessible within a bounded set of use-cases, and formal and/or AI complementing static methods have made static analyses more tractable in linting and domain crossing applications. What can be done to broaden the useful scope of static/formal? AI, no surprise, is playing an increasing role in answering that question.

Unifying Stimulus-Free Verification

The promise of both static and formal methods is that whatever they can prove, they prove quite generally, without the need to define stimulus; whereas dynamic verification can prove correctness only within the bounds of the use-cases you test. The tradeoff is that useful static and formal proofs are quite restricted in application. Siemens aims to loosen those limits through a new product release they call Questa One SFV (stimulus-free verification), integrating static and formal analysis together with AI.

One objective here is to simplify license management — one license pool to draw on whether you are running static checks, formal checks, or AI-operations, which helps simplify access and improve license utilization. This freedom applies equally to parallel usage, again maximizing utilization no matter what licenses are needed for a particular task.


Importantly under this umbrella, as one example, they can combine natural language understanding to parse a user-requested check, and from that generate and verify a property assertion. Similarly, linting checks can be filtered by formal methods to suppress much of the noise common to such static analysis, leaving only truly suspicious cases for DV/design resolution.
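
To make the idea concrete, here is a deliberately tiny, rule-based sketch that turns one pattern of plain-English request into an SVA-style assertion string. It is illustrative only; Questa One SFV's natural-language understanding and property generation are not a regex template, and the request wording below is invented.

```python
import re

# Toy sketch of "natural language in, property out": recognize one request
# pattern and emit an SVA-style assertion string. Illustrative only.

def request_to_assertion(request: str) -> str:
    """Turn 'after <trigger>, <target> must rise within <N> cycles' into SVA text."""
    m = re.match(r"after (\w+), (\w+) must rise within (\d+) cycles", request.lower())
    if not m:
        raise ValueError("unrecognized request pattern")
    trigger, target, cycles = m.groups()
    return f"assert property (@(posedge clk) {trigger} |-> ##[1:{cycles}] {target});"

print(request_to_assertion("After key_req, key_valid must rise within 8 cycles"))
# assert property (@(posedge clk) key_req |-> ##[1:8] key_valid);
```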

In this paper Siemens hints at applying this suite of capabilities to a more challenging requirement. While Questa One already supports a formal VIP library covering a range of needs, including AMBA compliance, it is not uncommon for design houses to adapt protocol standards to their own needs. For example, they might have a modified version of AHB for which they need their own compliance properties suite. Siemens asserts that Questa One SFV could handle this through generative methods, starting from the custom spec as input. They also suggest that partitioning large formal proofs can be automated through SFV, adding scalability to formal methods.

What else might be possible?

Continuing the theme of mixing AI, static, and formal, bug triage of various flavors is already in active deployment, especially around CDC and RDC analysis where I know that unsupervised learning techniques are used for grouping a large number of error reports into likely common root causes.

Another interesting application came up in a recent DAC panel for detecting naming-convention compliance violations (remember those from the Reuse Methodology Manual?). Turns out that adhering to naming conventions is becoming more important in support of equivalence checking and also, I would guess, in AI learning and inference support across design and (formal) testbench file structures. Yet, no surprise, designers aren’t always conscientious in following these rules (my design simulates and synthesizes correctly, who needs naming rules?).

More generally, I am holding out hope that AI can help scale the app concepts to something more flexible. The AHB application mentioned earlier is one example.

Interesting ideas. You can read the white paper HERE.


Memory Innovation at the Edge: Power Efficiency Meets Green Manufacturing
by Admin on 07-22-2025 at 10:00 am


By Tetsu Ho

With the ever-increasing global demand for smarter, faster electronic systems, the semiconductor industry faces a dual challenge: delivering high-performance memory while reducing environmental impact. Winbond is meeting this challenge head-on by embedding sustainability into every layer of its operations—from green manufacturing processes to low-power memory innovations designed for AI, automotive, and industrial applications.

One of Winbond’s most significant innovations is its Customized Memory Solution (CMS)—a next-generation DRAM portfolio tailored for application-specific performance and power efficiency. Evolving from Winbond’s extensive DRAM heritage, CMS integrates advanced low-power architectures and 3D packaging like KGD 2.0 to meet the needs of edge AI, industrial automation, smart cities, and healthcare systems.

Green Manufacturing Starts at the Fab

Winbond's fabrication site in Taichung, located in the Central Taiwan Science Park, is one of its most energy-efficient semiconductor facilities. Built with sustainability in mind, the Taichung fab now operates on 90% renewable electricity, achieving a 60% reduction in carbon emissions compared to 2021. Its cleanroom and utility systems are optimized under the ISO 50001 energy management standard, while ISO 14064-1 and ISO 14067 certifications ensure accurate tracking of facility and product-level emissions.

Manufactured in Winbond’s green-certified fabs, CMS is an essential component of the company’s sustainable product strategy. Every CMS device benefits from a supply chain powered by 90% renewable electricity and is tracked through Winbond’s proprietary carbon accounting system, ensuring visibility of Scope 1, 2, and 3 emissions.

Winbond has committed to achieving RE50 by 2030, meaning half of its global energy consumption will come from renewable sources. Its proprietary carbon inventory system accounts for Scope 1, 2, and 3 emissions, providing the company and its customers with clear visibility into environmental impact across the supply chain. This digital-first approach aligns with Taiwan's carbon fee policy and prepares Winbond's ecosystem for the rising demand for audited ESG data.

1.2V Serial NOR Flash: First in the Industry for Ultra-Low Voltage

Reducing system power begins at the component level. Winbond’s 1.2V Serial NOR Flash—the first in mass production—is a key step forward for energy-conscious design. This memory is built on a 45nm process, which significantly lowers active and standby power. It is ideal for battery-powered applications such as smart meters, wearables, and medical monitoring devices. This NOR Flash reduces the need for voltage regulators and extends battery life.

This enables designers to meet efficiency targets without compromising performance or reliability.

GP-Boost DRAM: High-Throughput, Low-Power Memory for Edge Intelligence

Winbond’s GP-Boost DRAM is designed for applications that require fast, continuous data processing at the edge, such as machine vision, real-time industrial control, or AI-enabled sensor hubs. Based on 20nm and advanced 16nm process nodes, it delivers high bandwidth and thermal stability while maintaining tight power budgets.

GP-Boost DRAM plays a key role in enabling real-time decision-making in embedded AI systems, without the need for active cooling or large energy reserves, making it a strong fit for both industrial and automotive markets.

CMS has already enabled a broad spectrum of intelligent systems—from motion control in factory automation and electric vehicle (EV) chargers, to secure medical devices, smart home systems, and AI-powered sensor hubs in smart cities. These deployments demonstrate how customized memory can extend device lifetime, reduce power draw, and support carbon reduction across vertical markets.

Octal and SLC NAND: Designed for Harsh Environments and Secure Performance

High-endurance Flash memory is essential in mission-critical systems, especially in automotive, industrial automation, and factory control applications. Winbond’s OctalNAND and SLC NAND Flash are built to withstand wide temperature fluctuations, intensive read/write cycles, and demanding performance requirements over long deployment periods.

Built-in bad block management, OTA (over-the-air) update support, and functional safety compliance help reduce failure rates and lower the environmental cost of field servicing or component replacement.

Embedded Security with Energy Efficiency: TrustME® Secure Flash

As embedded systems become more connected and vulnerable, data protection and sustainability are increasingly linked. Winbond’s TrustME® Secure Flash enables secure firmware storage, cryptographic key protection, and authenticated updates—all within a low-power profile optimized for edge and automotive environments.

By supporting secure OTA updates, Secure Flash helps extend device lifetime and reduces unnecessary physical servicing, ultimately lowering emissions and costs.

A Proven Commitment to ESG and Innovation

Winbond’s commitment to environmental, social, and governance (ESG) principles is embedded across its global operations—from how it sources raw materials to how it powers its fabs. Its sustainability performance has earned consistent external recognition, including the Taiwan Corporate Sustainability Awards (TCSA) ESG Award, the Corporate Sustainability Report Platinum Award, and a spot on the Top 100 Global Innovators list, acknowledging its technical leadership and responsible business practices.

A dedicated ESG Committee governs sustainability strategy, aligning with internationally recognized frameworks such as the Task Force on Climate-related Financial Disclosures (TCFD), the Responsible Business Alliance (RBA), and the Greenhouse Gas Protocol. The company publishes annual disclosures on emissions, energy use, and supply chain risk, including a Human Rights Due Diligence Report and a TCFD-aligned climate risk assessment.

Winbond’s environmental credentials are supported by third-party verified certifications, including but not limited to:

  • ISO 14001 (Environmental Management Systems)
  • ISO 14064-1 (Greenhouse Gas Emissions Inventory)
  • ISO 14067 (Product Carbon Footprint)
  • ISO 14046 (Water Footprint)
  • ISO 50001 (Energy Management System)
  • ISO 45001 (Occupational Health & Safety)

Winbond exceeds industry requirements by actively tracking Scope 3 emissions across its supply chain, utilizing its digital carbon accounting system. As a member of the Taiwan Climate Partnership (TCP), the company also works closely with other leading manufacturers to support shared decarbonization goals across the sector.

Whether switching to renewable energy early on or designing ultra-efficient memory products like the 1.2V Serial NOR Flash, GP-Boost DRAM, or CMS, Winbond is helping its customers reduce the carbon footprint of their end products. They’re shaping a more sustainable and resilient future for the semiconductor industry with a clear focus on transparency, measurable results, and practical action.

Conclusion: Sustainable Memory That Performs

Winbond’s sustainable semiconductor strategy is anchored in three core pillars: green fabs, green products, and green partnerships. By combining technical innovation with measurable sustainability gains, Winbond empowers OEMs and developers to create smarter, faster, and more environmentally responsible systems—from AI vision modules and electric vehicles to smart homes and industrial controllers.

Whether extending battery life, improving thermal performance, securing over-the-air updates, or ensuring environmental compliance, Winbond delivers the memory foundation for today’s high-performance, low-impact designs. Discover the complete portfolio at www.winbond.com.

Tetsu Ho, Technology Manager, holds a master’s degree in Industrial Engineering from National Tsing Hua University. He joined Winbond in 2005 and subsequently served as Product Engineer, Product Development Manager, and Marketing Technology Manager, and is responsible for promoting the EU DRAM market.

Also Read:

Sophisticated soundscapes usher in cache-coherent multicore DSP

Podcast EP297: An Overview of sureCore’s New Silicon Services with Paul Wells

AI Booming is Fueling Interface IP 23.5% YoY Growth


Alchip Launches 2nm Design Platform for HPC and AI ASICs, Eyes TSMC N2 and A16 Roadmap
by Daniel Nenni on 07-22-2025 at 6:10 am


Alchip Technologies, a global leader in high-performance computing (HPC) and AI infrastructure ASICs, has officially launched its 2nm Design Platform, marking a major advancement in custom silicon design. The company has already received its first 2nm wafers and is collaborating with customers on the development of high-performance ASICs built on this next-generation node. This milestone positions Alchip among the earliest adopters of TSMC’s leading-edge technologies, with a clear roadmap that extends to both TSMC’s N2 (2nm) and upcoming A16 (1.6nm) process technologies.

Advanced Chiplets and Packaging for 2nm Compute Systems

The new design platform delivers a full-stack methodology for building compute-dense, power-efficient ASICs on TSMC’s N2 node. It supports a broad set of chiplet integration strategies, enabling 2nm compute dies to work in tandem with 3nm or 5nm I/O chiplets. This approach supports a heterogeneous architecture that optimizes performance, yield, and design flexibility—critical in the post-Moore’s Law era.

Alchip’s platform also supports TSMC’s CoWoS®-S/R/L 2.5D/3D packaging, System on Integrated Chip (SoIC®-X) bonding, and is on track to support System on Wafer (SoW™) packaging for 3DICs. Additionally, die-to-die (D2D) IP and IO chiplet development are built into the platform, ensuring robust interconnect and thermal-aware design.

Overcoming N2 Design Complexity

TSMC’s N2 process represents its first gate-all-around (GAAFET) node, replacing FinFETs with nanosheet transistors. This shift offers notable benefits in performance, power efficiency, and area (PPA), with up to 10–15% speed gain or 25–30% power reduction over N3E. However, it also introduces significant layout and manufacturing challenges. These include tighter design rules, more complex power and signal routing, and new constraints around nanosheet stacking and variability.

Alchip’s 2nm Design Platform is engineered to address these issues head-on. The design flow is optimized to manage the increased diversity of standard cells and the denser transistor layouts introduced at N2. By anticipating placement, routing, and power integrity challenges early in the design process—before floorplanning or clock tree synthesis—Alchip reduces turnaround time while enhancing design predictability.

Power and Thermal Density Solutions

At 2nm, power and thermal density per square millimeter rise significantly due to increased gate counts and faster switching. Alchip’s methodology addresses this with thermal-aware floorplanning, advanced packaging co-optimization, and strategic power distribution planning. Even in the absence of native 2nm I/O chiplets, the platform supports mixed-node integration using 3nm and 5nm I/O for early deployment and yield optimization.
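
A quick back-of-envelope calculation shows why density climbs: dynamic power is roughly α·C·V²·f per gate, so packing more gates switching faster into the same or less area pushes W/mm² up even as per-gate capacitance and voltage shrink. Every number in the sketch below is a hypothetical placeholder chosen only to show the trend, not a characterization of N3E, N2, or any real design.

```python
# Back-of-envelope dynamic power density: P = alpha * C * V^2 * f per gate,
# summed over the gates and divided by die area. All numbers are placeholders.

def power_density_w_per_mm2(gates, alpha, c_gate_f, vdd, freq_hz, area_mm2):
    """Total dynamic power (W) divided by die area (mm^2)."""
    p_dyn = gates * alpha * c_gate_f * vdd**2 * freq_hz
    return p_dyn / area_mm2

older_node = power_density_w_per_mm2(
    gates=2.0e9, alpha=0.1, c_gate_f=0.8e-15, vdd=0.75, freq_hz=2.5e9, area_mm2=100)
denser_node = power_density_w_per_mm2(
    gates=3.0e9, alpha=0.1, c_gate_f=0.7e-15, vdd=0.70, freq_hz=3.0e9, area_mm2=90)

print(f"Older-node-like density:  {older_node:.2f} W/mm^2")
print(f"Denser-node-like density: {denser_node:.2f} W/mm^2")
```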

First-Pass Success, SoIC Demonstration, and A16 Transition

Alchip’s 2nm test chip achieved first-pass silicon success, validating both its methodology and IP stack. The design featured the company’s proprietary AP-Link-3D I/O interface, demonstrating full compatibility with SoIC-X chiplet interconnect. These results reinforce Alchip’s leadership in 3D integration and position it well for TSMC’s future process nodes, including A16™, which introduces backside power delivery and further transistor performance improvements.

Positioning for the TSMC N2 Era

TSMC began risk production on N2 in late 2024, with volume ramp expected in the second half of 2025. N2 introduces nanosheet GAAFETs, enabling better electrostatic control and design flexibility with variable channel widths. Alchip’s 2nm platform ensures customers are equipped to tap into these benefits while mitigating the risks associated with early-node development.

“We’re open for business and ready to support customers’ 2nm demand,” said Erez Shaizaf, CTO of Alchip Technologies. “Our new platform positions us as an industry leader, not only at 2nm but as we prepare for TSMC’s A16 era.”

“This is really just another milestone on our 2nm roadmap. Alchip’s 2nm platform is ready to work with key IP vendors, and we’ve been actively engaged with a couple of different companies on their 2nm ASIC developments. We anticipate this to be a very popular node for high-performance computing innovation,” explains Dave Hwang, General Manager, North America Business Unit.

Contact Alchip

About Alchip

Founded in 2003 and headquartered in Taipei, Alchip Technologies Ltd. is a leading global ASIC provider, specializing in HPC and AI applications. Its services span ASIC design, chiplet integration, 2.5D/3D packaging, and manufacturing management. Alchip serves top-tier system companies worldwide and is listed on the Taiwan Stock Exchange.

Also Read:

Alchip’s Technology and Global Talent Strategy Deliver Record Growth

Outlook 2025 with David Hwang of Alchip

Alchip is Paving the Way to Future 3D Design Innovation


Agile Analog Update at #62DAC
by Daniel Payne on 07-21-2025 at 10:00 am


On the last day of DAC 2025 I met with Chris Morrison, VP of Product Marketing at Agile Analog, to get an update. Their company provides Analog IP, the way you want it, and I knew that they had internal tools and a novel methodology to speed up the development process. This year they have started talking more about their internal IP automation tool, Composa.

Why use an analog IP automation tool?

Chris told me that there’s a list of challenges with conventional analog design: shortage of analog designers, too many processes and options, advanced nodes are difficult with new parasitics, and manual analog design is way too slow. Their answer was to address these challenges by using analog IP automation.

The approach is to combine analog experts inside the company with SW developers to auto-generate schematics for IP. Their Composa tool works with OpenAccess, the API from the Si2 OpenAccess coalition. Composa users first define their requirements, like SNR, supply rails, bandwidth and other specifications. Then there is a set of common analog building blocks, elements with their own characteristics that are combined to define the new IP. For example, an Analog to Digital Converter (ADC) needs a sample switch, input buffer and other blocks. These lower-level blocks are combined, a PDK is selected for a specific process, then the tool optimizes the transistor W/L sizes using analytical equations.

A traditional approach to circuit sizing involves running lots of SPICE simulations, but Composa uses a much faster method of equation-based device sizing. For circuits with feedback, some SPICE runs may still be used. The optimization process with Composa is not CPU intensive at all, typically requiring only a few minutes of CPU time to come up with the proper device sizes to meet your specifications. Full verification of the analog IP is done with a traditional flow, including many Monte Carlo simulations. There's little or no manual tweaking of device sizes required to meet your specs.
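
Composa's sizing equations are Agile Analog's own; the sketch below just uses the textbook square-law relation to show the general idea of solving for W/L from a spec instead of sweeping SPICE. The process constant and targets are generic placeholders, not a real PDK or Composa output.

```python
# Textbook square-law sizing, illustrative only:
#   gm = mu*Cox * (W/L) * Vov   =>   W/L = gm / (mu*Cox * Vov)

def size_wl(gm_target_s, mu_cox_a_per_v2, vov_v):
    """Return the W/L ratio needed to hit a target gm at a chosen overdrive."""
    return gm_target_s / (mu_cox_a_per_v2 * vov_v)

mu_cox = 200e-6    # A/V^2, hypothetical NMOS process transconductance parameter
gm_target = 1e-3   # 1 mS, set by a bandwidth/noise requirement
vov = 0.2          # V, overdrive chosen for headroom

wl = size_wl(gm_target, mu_cox, vov)
print(f"Required W/L ratio: {wl:.1f}")                # 25.0
print(f"e.g. W = {wl * 0.18:.2f} um at L = 0.18 um")  # 4.50 um
```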

With Composa the engineers at Agile Analog can get to the exact specs for an IP block in minutes, not days or weeks of manual efforts. Even changing to a different PDK will show new results in just a few minutes.

Customers of Agile Analog span a broad range of sectors and applications: power management ICs (PMICs), data converters, chip health and monitoring, PVT, IoT, defense, and security, including anti-tamper IP that addresses voltage glitching, clocking attacks, and electromagnetic injection. Defense customers could be designing at 165nm or 130nm process nodes, datacom at 3nm, so Composa creates analog IP for quite a wide spectrum of processes.

Digital designers have used logic synthesis to retarget process nodes for decades, and this is now possible with analog design. If a customer wants a new oscillator, then Composa can be used to create a schematic and layout. Composa is an expert system – it is repeatable, human understandable, and device sizing is not a probability problem.

Composa is a no-code system for users; its parameters are typed in a YAML script to configure what you want. Internally, they just fill in the YAML to control each IP block generator. Composa has changed over time by expanding the element library and verifying that it works across all PDKs, including some tuning for a new PDK. The Composa tool has created some 60 new IPs in the last 2 years.
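
Agile Analog has not published the Composa YAML schema, so the toy spec and dispatch below are purely illustrative of the "parameters in a YAML file, no code" flow described above; the field names and generator stubs are invented.

```python
import yaml  # PyYAML

# Toy spec illustrating a YAML-driven, no-code configuration flow.
spec_text = """
block: adc
process: example_pdk_28nm
requirements:
  resolution_bits: 12
  sample_rate_msps: 10
  supply_v: 1.8
"""

# Hypothetical per-block generators keyed by block type.
GENERATORS = {
    "adc": lambda req: (f"ADC generator: {req['resolution_bits']}-bit, "
                        f"{req['sample_rate_msps']} MS/s at {req['supply_v']} V"),
    "oscillator": lambda req: "Oscillator generator: ...",
}

spec = yaml.safe_load(spec_text)
print(f"Target PDK: {spec['process']}")
print(GENERATORS[spec["block"]](spec["requirements"]))
```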

Analog security IP is of special interest for the Agile Analog team as security has become a critical requirement for every SoC being developed. The company believes that it can offer differentiated anti-tamper solutions that are complementary to other providers of RoT (Root of Trust) and cryptographic engines, delivering value at the subsystem level with their security IP offerings. Another focus area is their data conversion IP solutions. They are working with a strategic customer to deploy their 12 bit ADC on the latest TSMC nodes.

Agile Analog is based in the UK, while Krishna Anne, the CEO, is in Silicon Valley. 2025 has been another good year of revenue growth at the company. Visit their website for more product information. They have direct sales in the US and Europe, with some distributors in Taiwan, Korea and China. Catch up with the Agile Analog team at the GlobalFoundries and TSMC events.

Summary

Analog IP is in high demand, but the older manual methods to hand-craft IP just take too long and require expert experience. Agile Analog has a different approach using their Composa tool to automate the IP creation process, with a library of analog building blocks. What used to take days or weeks of engineering effort now can be accomplished in minutes with this new methodology, significantly reducing the complexity, time and costs associated with traditional analog IP.

Related Blogs

 


Protecting Sensitive Analog and RF Signals with Net Shielding
by Admin on 07-21-2025 at 6:00 am


By Hossam Sarhan

Communication has become the backbone of our modern world, driving the rapid growth of the integrated circuit (IC) industry, particularly in communication and automotive applications. These applications have increased the demand for high-performance analog and radio frequency (RF) designs.

However, designing analog and RF circuits can be quite challenging due to their sensitivity to various factors. Changes in layout design, operating conditions, and manufacturing processes can all have a significant impact on circuit performance. One of the major hurdles faced by analog designers is the issue of noise coupling between interconnects.

The proximity and interactions between different circuit elements can lead to signal noise, which can degrade the overall circuit performance. This is a critical concern, as analog and RF circuits are more susceptible to proximity effects, such as crosstalk and coupling noise, compared to their digital counterparts.

Mitigating noise coupling with net shielding

One of the widely used techniques to protect critical nets in analog and RF circuit designs is net shielding. This approach involves surrounding the sensitive signal nets with power or ground nets, which create a shielding effect that helps mitigate the impact of electromagnetic interference and crosstalk on the critical signal traces.

The power and ground nets, with their stable and low-noise characteristics, act as a barrier to isolate the critical signals from noise sources. This shielding helps maintain the integrity of the sensitive signals, preventing unwanted noise and disturbances. Figure 1 illustrates net shielding.

Figure 1: Net shielding methodology.

Additionally, the geometries belonging to the same net, when placed in close proximity to each other, can also act as a form of self-shielding. The proximity of the same-net traces creates a shielding effect, further protecting the critical signals from external interference.

By employing net shielding techniques, circuit designers can effectively safeguard the performance and reliability of analog and RF circuits, ensuring that the critical signals are isolated from noise sources and maintain their intended behavior.

Verifying net shielding effectiveness

Verifying the effectiveness of net shielding is not a straightforward task, as it requires tracing the critical net segments and checking the surrounding nets to confirm how much of the victim net is shielded. This process can be time-consuming and error-prone if done manually.

To address this challenge, designers can adopt an advanced reliability verification platform that provides comprehensive net shielding verification. A solution like Calibre PERC from Siemens EDA offers a packaged checks framework for net shielding verification that automates the verification process, streamlining the design validation workflow. This framework permits simple selection and configuration of pre-coded checks, maximizing ease-of-use and minimizing runtime setup. Calibre PERC packaged checks, along with dedicated checks, are provided to enhance the reliability of analog circuits.

The input for the packaged checks flow is a user configuration file with specified checks and their parameters. This input constraint file is processed by a package manager, which accesses the checks database and creates a rule file containing all of the selected checks, with the proper configuration parameters to run on the designated design. Figure 2 shows the net shielding setup using Calibre PERC packaged checks GUI.

Advanced net shielding checks allow designers to specify the critical nets in their design and the minimum shielding percentage threshold required. The verification tool then automatically traces each critical net, analyzes the surrounding shielding nets, and calculates the shielding percentage for each net.
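
As a purely illustrative sketch of that bookkeeping, the snippet below computes a shielding percentage as shielded routed length over total routed length per critical net and flags nets below a threshold. The net names, segment data, and threshold are invented; the Calibre PERC packaged check derives this information from the actual layout rather than a hand-written table.

```python
# Illustrative only: shielding percentage = shielded length / total length per net.
critical_nets = {
    # net: list of (segment_length_um, flanked_by_power_or_ground)
    "rf_in":   [(40.0, True), (12.0, True), (8.0, False)],
    "vco_out": [(25.0, True), (25.0, False)],
}
MIN_SHIELD_PCT = 80.0  # hypothetical user threshold

for net, segments in critical_nets.items():
    total = sum(length for length, _ in segments)
    shielded = sum(length for length, ok in segments if ok)
    pct = 100.0 * shielded / total
    status = "PASS" if pct >= MIN_SHIELD_PCT else "FAIL"
    print(f"{net:8s} {pct:5.1f}% shielded  [{status}]")
```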

The verification results can be viewed and cross-probed in the layout to help with debug.

By leveraging automated net shielding verification, designers can quickly and reliably validate the effectiveness of their net shielding implementation, ensuring that the critical signals are adequately protected from noise sources. This streamlined approach helps designers identify and address any net shielding issues, enhancing the overall reliability and performance of their analog and RF circuits.

The key benefits of using an advanced net shielding verification tool include:

  • Automated verification: The platform’s dedicated net shielding check eliminates the manual and error-prone process of tracing net segments and calculating shielding coverage, saving designers significant time and effort.
  • Streamlined integration: The platform’s packaged checks framework allows designers to easily integrate net shielding verification into their overall design validation flow, enabling them to combine multiple reliability checks into a single validation run.
  • Improved reliability: By quickly and reliably validating the effectiveness of net shielding implementation, the advanced platform helps designers identify and address any issues, ensuring the overall reliability and performance of their sensitive analog and RF circuits.

Conclusion

Protecting critical signals from noise coupling is a crucial aspect of successful analog and RF circuit design. Net shielding is a widely used technique that involves surrounding sensitive signal nets with power or ground nets to create a shielding effect, mitigating the impact of electromagnetic interference and crosstalk.

However, verifying the effectiveness of net shielding can be a challenging task. Fortunately, solutions exist. Designers can easily adopt an advanced reliability verification platform to provide automated and streamlined net shielding verification. With the right tools, designers can quickly identify and address any issues, ultimately enhancing the reliability and performance of their analog and RF designs. Packaged net shielding checks help designers deliver high-quality products that meet the demanding requirements of today's communication and automotive applications.

About the author:

Hossam Sarhan is a senior product engineer in the Design to Silicon division of Siemens Digital Industries Software, supporting the Calibre PERC reliability platform and Calibre PEX tools. His current work focuses on circuit reliability verification and inductance parasitics extraction. Prior to joining Siemens, he worked in modeling and design optimization for on-chip power management circuits. Hossam received his B.Sc. from Alexandria University, Egypt, his M.Sc. degree from Nile University, Egypt, and his Ph.D. from CEA-LETI, Grenoble, France.

Also Read:

Revolutionizing Simulation Turnaround: How Siemens’ SmartCompile Transforms SoC Verification

Siemens EDA Unveils Groundbreaking Tools to Simplify 3D IC Design and Analysis

Jitter: The Overlooked PDN Quality Metric