
Executive Interview with Matthew Addley
by Daniel Nenni on 07-20-2025 at 10:00 am


Matthew Addley is an Industry Strategist at Infor, specializing in the global manufacturing sector. With over 30 years of experience in driving business transformation through technology, he aligns industry needs with Infor’s product strategy through thought leadership, customer engagement, and market insight. Beginning his career in the UK aerospace and defence industry, Matthew now spends much of his time in the Asia Pacific region operating from his home office in Sydney, Australia, bringing a global perspective across mature and emerging markets in ERP, manufacturing and supply chain excellence, and the increasing value of platform technologies.

What are the common supply chain and operational challenges you see among your customers?

Across industries and regions, a recurring theme we hear about is the difficulty of achieving true collaboration throughout the supply chain. Interestingly, the specific pain points can differ depending on where you’re located. For example, in the U.S., customers will often say, “Our suppliers aren’t collaborating with us,” while in Thailand, the sentiment is flipped: “Our customers aren’t collaborating with their suppliers.” The underlying issue (a breakdown in coordinated communication) is consistent, but perceptions of where the problem originates shift depending on regional context.

Another major challenge is the need to respond quickly and efficiently to change. The need for resilience and responsiveness has never been higher, as global supply chains continue to face geopolitical disruptions and lingering fragility from past events. As a result, organizations are under pressure to adapt rapidly to changes in demand, supply shortages, and pricing fluctuations.

At the operational level, one challenge that’s often overlooked, but at the same time is incredibly impactful, is onboarding new employees on the shop floor. We’re seeing a massive generational knowledge shift, where the people with deep knowledge of processes are retiring or moving on, and that knowledge is often left undocumented. It becomes extremely difficult to maintain production efficiency when newer workers are left to figure things out on their own. We deliver enterprise applications to bridge that gap by making processes more visible and repeatable, turning experience into data that everyone can use.

Infor’s How Possible Happens report found that 75% of global companies surveyed expect gains of 20% or more from technology, yet our evidence suggests that, without the focus on bulletproof processes, agility, and customer-centricity that our solutions provide, many fail to reach their objectives. We partner to help organizations better anticipate and adapt to supply chain disruptions, proving that visibility and agility are more than buzzwords. They’re measurable outcomes.

What specific challenges or use cases have you seen in the semiconductor industry, and how are you helping customers address them?

The semiconductor industry faces unique challenges related to supply chain fragility and component sourcing. One specific issue is ensuring the consistent quality of highly specialized parts across different suppliers. Historically, many manufacturers relied on a single supplier to meet the necessary minimum order quantities. But that approach is becoming increasingly risky.

We enable what we call “true dual sourcing,” which is the ability to proactively manage multiple suppliers for the same part, rather than just defaulting to the one that offers the right quantity. More importantly, we track and manage quality and other performance measures across suppliers so that if a company shifts from one supplier to another, they can establish and maintain confidence in quality. We also allow customers to allocate supply based on historical performance, essentially increasing resilience.
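To make the allocation idea concrete, here is a minimal sketch of splitting an order across dual-sourced suppliers in proportion to historical performance scores. The supplier names, the scores, and the allocate helper are hypothetical illustrations of the concept, not Infor's implementation.

```python
# Hypothetical sketch: split an order across dual-sourced suppliers in
# proportion to historical quality/performance scores. Not Infor's API.

def allocate(order_qty: int, suppliers: dict) -> dict:
    """Allocate order_qty across suppliers weighted by performance score."""
    total = sum(suppliers.values())
    alloc = {name: int(order_qty * score / total)
             for name, score in suppliers.items()}
    # Give any rounding remainder to the historically best performer
    best = max(suppliers, key=suppliers.get)
    alloc[best] += order_qty - sum(alloc.values())
    return alloc

# Scores could be rolling defect-free delivery rates (illustrative data)
print(allocate(10_000, {"supplier_a": 0.98, "supplier_b": 0.91}))
# {'supplier_a': 5186, 'supplier_b': 4814}
```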

In addition, we track parts beyond just the generic descriptors of form, fit, and function. We capture the manufacturer’s part number, which gives far more granular insight and allows our customers to know whether a part can be used in a highly specific application or only in a generic context. That’s critical in semiconductor manufacturing and downstream activities, where a seemingly identical part from two different sources might not behave the same way. With our system, customers gain the visibility they need to make those nuanced decisions.

One semiconductor manufacturer cited in the How Possible Happens report saw a 40% reduction in time spent on quality-related supplier follow-ups after implementing Infor’s solution, which is a great example of how precise data and supplier insights drive better decision-making.

Where does your solution outperform your competitors?

Where Infor really shines is in operations, especially in areas like production, supply chain planning, and execution.

We often hear from our customers that “our operations are cleaner and better with your solution.” That’s because we’re built with manufacturing and supply chain complexity in mind, not just financial reporting. In fact, our financial modules are strong enough to support global operations, but they don’t need to be over-engineered because we reduce the amount of rework required. We’re able to capture accurate data at the point of production, which flows directly into financial processes, minimizing the need for reconciliation.

The challenge for us is that CFOs are sometimes comfortable with Infor’s competitors. One of our goals is to reassure them that we’re not trying to immediately overhaul everything, especially not their core financial systems. Instead, we often coexist with them initially, while bringing real-time, detailed operational visibility to the production floor. That’s where we outperform: in helping customers operate more efficiently day-to-day.

And customers are seeing the difference: 64% of Infor users report improved operational efficiency within 12 months of go-live, underscoring our ability to drive immediate, meaningful value where it matters most.

How do you ensure flexibility while maintaining a prescriptive product approach?

We take a prescriptive approach where it makes sense, but we know that not every customer fits into a single mold. That’s why we maintain a verticalized product management structure. When a customer comes to us with a unique need, we first ask: “Is this a one-off requirement, or is it something we’re hearing across the industry?” If it’s a common issue, we’ll prioritize building it into the product roadmap. If it’s a one-off, we offer customization through cloud extensibility.

One key advantage of our platform is that customizations don’t break during upgrades. In many legacy ERP systems, custom code can derail an entire upgrade process, forcing customers to rework configurations every 6–12 months. With Infor, upgrades are seamless: customers get a tailored experience without sacrificing agility or incurring high maintenance costs. This is especially important for companies that need to adapt quickly while remaining within budget.

How does your partner ecosystem support customer success across different segments, from SMBs to large enterprises?

Our partner ecosystem is one of our most important assets. We work with a range of partners, from regional experts and boutique consulting firms to global systems integrators like Deloitte. These partners help us deliver localized, industry-specific support to customers of all sizes.

Infor’s CloudSuite solutions play a central role in enabling this success. Built on a multi-tenant cloud architecture, CloudSuite gives businesses of all sizes the ability to scale quickly, respond to market changes with agility, and gain real-time visibility into operations across the enterprise. Our partners are trained to leverage these capabilities to help customers drive faster time-to-value, reduce IT complexity, and improve transparency across the board.

For mid-market and enterprise clients, particularly in multi-tier manufacturing or semiconductor settings, we often operate in a “two-tier” ERP model: Infor runs on the shop floor while headquarters uses a different enterprise system. In these cases, our partners help ensure seamless data flow and coordination between the two systems.

For SMBs, our partners play a critical role in delivering fast, cost-effective implementations. These customers often don’t have large IT teams, so our partners step in as both implementers and ongoing advisors, sometimes even serving as virtual CIOs or COOs. The goal is to meet customers where they are and provide the right level of support based on their size, industry, and growth trajectory. And it’s working, with 79% of Infor customers saying that moving to CloudSuite helped them scale more quickly and respond to business changes with greater agility.

What is your approach to incorporating new technologies like AI and machine learning?

We don’t believe in handing customers a generic AI toolkit and saying, “Go figure it out.” Instead, we’re focused on delivering purpose-built, scenario-driven AI solutions that solve specific, tangible problems.

Take contract analysis in the electronics industry, for example. Service terms in these contracts are critical, and comparing them manually is time-consuming and error-prone. We’re using generative AI to help partners instantly analyze and compare service terms across contracts. This drastically reduces the time and effort required to make informed decisions, particularly in fast-moving environments where speed and accuracy are essential.
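As a rough sketch of the pattern (not Infor's actual service), the flow is to gather the relevant clauses and ask a generative model for a structured comparison. The llm_complete function below is a hypothetical placeholder for whichever model endpoint is actually used.

```python
# Hypothetical sketch of LLM-assisted contract comparison. llm_complete is
# a placeholder; wire it to a real model endpoint before use.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("connect to an LLM provider here")

def compare_service_terms(contract_a: str, contract_b: str) -> str:
    """Ask the model for a side-by-side comparison of service terms."""
    prompt = (
        "Compare the service terms in the two contracts below. Return a "
        "table with columns: term, contract A, contract B, material "
        "differences.\n\n"
        f"--- Contract A ---\n{contract_a}\n\n"
        f"--- Contract B ---\n{contract_b}"
    )
    return llm_complete(prompt)
```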

Infor Velocity Suite plays a key role in how we enable rapid, value-driven innovation. It provides a foundation of pre-built, industry-specific accelerators and extensible AI capabilities that help customers deploy and scale technology quickly without needing to start from scratch. With Velocity, we’re able to deliver advanced features like AI-driven supply chain planning, inventory optimization, and predictive maintenance in a way that’s tailored to each customer’s industry context.

We always prioritize practical value over hype. We’re not here to sell AI for AI’s sake. We’re here to make it work for our customers—in ways they can deploy today and see results from tomorrow.

Also Read:

CEO Interview with Shelly Henry of MooresLabAI

CEO Interview with Dr. Maksym Plakhotnyuk of ATLANT 3D

CEO Interview with Carlos Pardo of KD


CEO Interview with Jonathan Reeves of CSignum
by Daniel Nenni on 07-20-2025 at 8:00 am


For more than 30 years, Jonathan has successfully led start-up ventures through multiple acquisitions and has held senior operating roles in networking, cloud computing, cybersecurity, and AI businesses.

He co-founded Arvizio, a provider of enterprise AR solutions, and was Chairman and co-founder of CloudLink Technologies, which today is part of Dell. He also founded and served as CEO of several networking companies, including Sirocco Systems and Sahara Networks.

Tell us about your company.

CSignum’s patented wireless platform is revolutionizing underwater and underground communications by overcoming the limitations of traditional radio, acoustic, and optical systems, unlocking new possibilities for IoT connectivity below the surface.

The company’s flagship EM-2 product line enables real-time, wireless data transmission from submerged or buried sensors to a nearby surface gateway through many challenging media, including water, ice, soil, rock, and concrete.

The solutions integrate with industry-standard sensors, enabling rapid, low-maintenance application deployment without the surface buoys, pedestals, or cables that can clutter natural environments.

What problems are you solving?

CSignum addresses a fundamental connectivity gap by quickly and easily linking data from sensors in submerged and subsurface locations, even in challenging conditions, to the desktop for monitoring and analysis, eliminating the blind spots in critical infrastructure and services.

This opens transformative possibilities for smarter infrastructure, safer operations, and better environmental outcomes on a global scale.

What application areas are your strongest?

CSignum’s strongest application areas are those where reliable, real-time data is needed from environments traditionally considered too difficult or costly to monitor:

  • Water Quality Monitoring: For rivers, lakes, reservoirs, and combined sewer overflows (CSOs), supporting compliance with evolving environmental regulations.
  • Corrosion Monitoring: For buried pipelines, storage tanks, marine structures, and offshore energy platforms, where monitoring is critical for safety and asset longevity.
  • Under-Vessel Monitoring: Including propeller shaft bearing wear, hull integrity, and propulsion system health for commercial and naval fleets—without dry-docking or through-hull cabling.
  • Urban Infrastructure: Monitoring storm drains, culverts, and wastewater systems in confined spaces.
  • Offshore Wind and Energy: Supporting environmental, structural, and subsea equipment monitoring on and around offshore wind turbines and platforms.

What keeps your customers up at night?

From public water systems and offshore platforms to shipping fleets and underground utilities, our customers are responsible for critical infrastructure. They worry about the impact of not knowing what’s happening below the surface:

  • Missed or delayed detection of environmental incidents, such as sewer overflows, leaks, or pollution events that could lead to regulatory penalties, reputational damage, or public health risks.
  • Undetected equipment degradation, especially corrosion or mechanical wear, that can result in costly failures, downtime, or safety hazards.
  • Gaps in real-time data from buried or submerged infrastructure due to the limits of traditional wireless or cabled systems, particularly in hard-to-access locations.
  • Compliance pressures, especially as governments introduce stricter real-time monitoring and reporting requirements in water, energy, and maritime sectors.
  • Resource constraints: accessing reliable, high-frequency data without adding personnel, vehicles, or costly construction projects.

What does the competitive landscape look like and how do you differentiate?

CSignum is the world’s first commercially viable platform that successfully transmits data through water, ice, soil, and other signal-blocking media, simplifying real-time data collection from the most inaccessible and hazardous locations, reducing risk and cost. No other solution currently achieves this.

The innovation and differentiation lie not just in the core technology but in the range of applications it unlocks: water quality monitoring, corrosion detection in submerged pipelines, tracking structural health of marine infrastructure, and enabling communications in ice-covered or disaster-prone environments.

What new features/technology are you working on?

CSignum is scaling its platform for widespread adoption across water and other utilities, maritime, energy infrastructure, defense, and environmental monitoring, especially through partnerships.

One area of expansion includes under-vessel systems monitoring, where CSignum’s technology enables wireless measurement of propeller shaft bearing wear and propulsion system health, all without the need for through-hull cabling or dry dock access.

In parallel, we will expand our EM-2 product family, launching next-gen models with longer battery life, smaller form factor, enhanced analytics, and plug-and-play compatibility with leading sensor systems. The CSignum Cloud platform will evolve into a hub for predictive diagnostics, anomaly detection, and digital twin integration.

How do customers normally engage with your company?

We work closely with customers to understand the physical constraints, data requirements, and operational goals of their environment.

From there, we guide them through a proof-of-concept or pilot deployment, leveraging our modular EM-2 systems and integrating with their existing sensors or preferred platforms.

Customers value our deep technical support, application expertise, and the flexibility of a platform that requires no cabling, no trenching, and minimal site disruption.

Contact CSignum

Also Read:

CEO Interview with Yannick Bedin of Eumetrys

The Sondrel transformation to Aion Silicon!

CEO Interview with Krishna Anne of Agile Analog


Podcast EP298: How Hailo is Bringing Generative AI to the Edge with Avi Baum
by Daniel Nenni on 07-18-2025 at 10:00 am

Dan is joined by Avi Baum, Chief Technology Officer and Co-Founder of Hailo, an AI-focused chipmaker that develops specialized AI processors for enabling data-center-class performance on edge devices. Avi has over 17 years of experience in system engineering, signal processing, algorithms, and telecommunications, and has focused on wireless communication technologies for the past 10 years.

Dan explores the breakthrough AI processors Hailo is developing. These devices enable high performance deep learning applications on edge devices. Hailo processors are geared toward the new era of generative AI on the edge. Avi describes the impact generative AI on the edge can have by enabling perception and video enhancement through Hailo’s wide range of AI accelerators and vision processors. He discusses how security and privacy can be enhanced with these capabilities as well as the overall impact on major markets such as automotive, smart home and telecom.

Contact Hailo

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Shelly Henry of MooresLabAI
by Daniel Nenni on 07-18-2025 at 6:00 am


Shelly Henry is the CEO and Co-Founder of MooresLabAI, bringing over 25 years of semiconductor industry experience. Prior to founding MooresLabAI, Shelly led silicon teams at Microsoft and ARM, successfully delivering chips powering billions of devices worldwide. Passionate about driving efficiency and innovation, Shelly and his team at MooresLabAI are transforming chip development through specialized AI-driven automation solutions.

Tell us about your company.

MooresLabAI, founded in 2025, is transforming semiconductor development using specialized AI automation. With our platform, chip design teams can accelerate their schedules by up to 7x and cut pre-fabrication costs by 86%. We integrate seamlessly into existing workflows, helping semiconductor companies rapidly deliver reliable silicon.

What problems are you solving?

Semiconductor design is notoriously expensive and slow – verification alone can cost tens of millions and take months of engineering effort. Our VerifAgent™ AI platform automates and dramatically accelerates these verification processes, reducing human error and addressing the critical talent shortage facing the industry.

What application areas are your strongest?

Our strongest traction is with companies designing custom AI, automotive, and mobile chips. Our early adopters include major NPU providers and mobile chipset developers who are already seeing impressive productivity gains and significant reductions in costly errors.

What keeps your customers up at night?

They worry about verification delays, costly re-tapeouts, and stretched engineering resources. With MooresLabAI, our customers experience significantly faster verification cycles, fewer late-stage bugs, and can do more with existing resources, easing these critical pain points.

What does the competitive landscape look like and how do you differentiate?

Many current AI tools provide general assistance but are not built specifically for semiconductor workflows. MooresLabAI uniquely offers end-to-end, prompt-free automation designed explicitly for silicon engineering. We seamlessly integrate with all major EDA platforms and offer secure, flexible deployment options, including on-premises solutions.

What new features/technology are you working on?

We are expanding beyond verification to offer complete end-to-end chip development automation—from architecture and synthesis to backend physical design, firmware generation, and full SoC integration. Our modular AI-driven platform aims to cover the entire silicon lifecycle comprehensively.

How do customers normally engage with your company?

Customers typically start with our pilot programs, which clearly demonstrate value with minimal initial effort. Successful pilots transition smoothly into subscription-based engagements, with flexible licensing options tailored to customer needs. For those hesitant about immediate adoption, we also offer verification services to quickly address specific project needs.

Contact MooresLabAI

Also Read:

CEO Interview with Dr. Maksym Plakhotnyuk of ATLANT 3D

CEO Interview with Carlos Pardo of KD

CEO Interview with Darin Davis of SILICET

CEO Interview with Peter L. Levin of Amida


New Cooling Strategies for Future Computing
by Daniel Payne on 07-17-2025 at 10:00 am


Power densities on chips increased from 50–100 W/cm² in 2010 to 200 W/cm² in 2020, creating a significant challenge in removing and spreading heat to ensure reliable chip operation. The DAC 2025 panel discussion on new cooling strategies for future computing featured experts from NVIDIA Research, Cadence, ESL/EPFL, the University of Virginia, and Stanford University. I’ll condense the 90-minute discussion into a blog.

Four techniques to mitigate thermal challenges were introduced:

  • Circuit Design, Floor Planning, Place and Route
  • Liquid or Cryogenic Cooling
  • System-level, Architectural
  • New cooling structures and materials

For circuit design, there are temperature-aware floor planning tools, PDN optimization, temperature-aware TSVs, and the use of 2.5D chiplets. Cooling has been done with single-phase cold plates, inter-layer liquid cooling, two-phase cooling, and cryogenic cooling in the 77K–150K range. System-level approaches include advanced task mapping, interleaving memory and compute blocks, and temperature-aware power optimization. New cooling structures and materials involve diamond, copper nanomesh, and even phase-change materials.

John Wilson from NVIDIA talked about a 1,000X increase in single-chip AI performance in FLOPS over just 8 years, going from the Pascal series to the Blackwell series. Thermal design power has gone from 106W in 2010 to 1,200W in 2024. Data centers using Blackwell GPUs use liquid cooling to attain a power usage efficiency (PUE) of 1.15 to 1.2, providing a 2X reduction in overhead power. At the chip level, heat spreads quickly from small hotspots, while it spreads more slowly from larger hotspots. GPU temperatures depend on the silicon carrier thickness and the type of stacking. Stack-up materials such as diamond and silicon carbide also impact thermal characteristics.

A future cooling solution is using silicon microchannel cold plates.

Jamil Kawa said that the energy needs of AI-driven big data compute farms already exceed our projected power generation capacity through 2035, to the point that Microsoft revived a nuclear reactor at Three Mile Island for its compute farm and data center energy needs. That is not a sustainable path. Lower energy consumption per instruction (or per switched bit) is needed, and cold computing provides that answer even after all cooling costs are taken into account. There are alternative technologies that are very energy efficient at cryogenic temperatures, such as Josephson junction-based superconducting electronics operated below 4K, but they have major limitations, chief among them an area per function more than 1,000 times that of CMOS. Therefore, CMOS technology operated in a liquid nitrogen environment, with an operating range of 77K to 150K, is the answer. The cooling costs of liquid nitrogen are offset by the dramatically lower total power dissipated at iso-performance: operating CMOS in that range allows a much lower VDD (supply voltage) for the same performance, generating far less heat to dissipate.

David Atienza, a professor at EPFL, talked about quantum computing, superconducting, and HPC challenges. He said the temperatures for superconducting circuits used in quantum computing are in the low Kelvin range. Further, for an HPC chip to be feasible, dark silicon is required to reduce power. Microsoft plans to restart the Three Mile Island power plant to power its AI data center. Liquid nitrogen can be used to lower the temperature and increase the speed of CMOS circuits. Some cold CMOS circuits can run at just 350 mV for VDD to manage power.
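A back-of-the-envelope formula shows why the low supply voltage matters so much: dynamic switching power scales with the square of VDD. Assuming an illustrative nominal supply of 0.8V (my assumption, not a figure from the panel), dropping to the 350 mV cited above cuts dynamic power roughly five-fold at the same frequency, before counting the extra transistor speed available at 77K:

```latex
P_{\mathrm{dyn}} = \alpha\, C\, V_{DD}^{2}\, f
\quad\Rightarrow\quad
\frac{P_{\mathrm{dyn}}(0.35\,\mathrm{V})}{P_{\mathrm{dyn}}(0.8\,\mathrm{V})}
  = \left(\frac{0.35}{0.8}\right)^{2} \approx 0.19
```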

Albert Zeng, Sr. Software Engineering Group Director at Cadence, said they have thermal simulation software for the whole stack, starting with Celsius Thermal Solver used on chips, all the way up to the Reality Digital Twin Platform. Thermal analysis started in EDA with PCB and packages and is now extending into 3D chips. In addition to thermal for 3D, there are issues with stress and the impact of thermal on timing and power, which require a multi-physics approach.

Entering any data center requires hearing protection against the loud noise of the cooling system fans. A system analysis for cooling capacity is necessary, as GB200-based data centers require liquid cooling, and AI workloads are only increasing over time.

Mircea Stan, a professor at the University of Virginia, presented ideas on limiting heat generation and efficiently removing heat. The 3D-IC approach runs into power delivery and thermal walls. Voltage stacking can be used to break the power delivery wall, and microfluidic cooling will help break the thermal wall.

Stan’s group has created an EDA tool called HotSpot 7.0 that models and simulates a die thermal circuit, a microfluidic thermal circuit, and a pressure circuit.
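To give a flavor of what such a tool computes (a generic single-node illustration of thermal RC modeling, not HotSpot's actual formulation), die temperature can be estimated from power, thermal resistance, and thermal capacitance:

```python
# Generic single-node thermal RC model (illustrative; not HotSpot internals).
# dT/dt = (P - (T - T_amb) / R_th) / C_th, integrated with forward Euler.

def simulate_die_temp(power_w, r_th=0.2, c_th=50.0, t_amb=45.0, dt=0.01):
    """power_w: power samples in W; r_th in K/W; c_th in J/K; dt in s."""
    t, temps = t_amb, []
    for p in power_w:
        t += dt * (p - (t - t_amb) / r_th) / c_th
        temps.append(t)
    return temps

# 10 seconds of a 300 W burst (illustrative values)
trace = simulate_die_temp([300.0] * 1000)
print(f"after burst: {trace[-1]:.1f} C")  # climbing toward T_amb + P*R_th = 105 C
```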

Srabanti Chowdhury from Stanford University talked about maximizing device-level efficiency that scales to the system level. Initially, heat sinks and package fins were used for 2D processors to manage thermal issues. An ideal thermal solution would spread heat within nanometers of any hotspot, spreading heat both laterally and vertically, and integrate with existing semiconductor processing materials. Their research has shown that diamond 3D thermal scaffolding is a viable technique for 3D-IC thermal management.

Stanford researchers have been developing this diamond thermal dielectric since 2016 and are currently in the proof-of-concept phase.

Q&A

Q:  What about security and thermal issues?

A: Jamil – Yes, thermal hacking is a security issue; attackers use thermal schemes to read secret data, so there are side-channel mitigation techniques.

Q: Is there a winning thermal technology?

A: Liquid cooling on the edge is coming, but the overhead of microfluidics has to be justified by the benefit.

Q: Can we do thermal management with AI engines?

A: Albert – We can use AI models to help with early design power estimates and thermal analysis of ICs. Data centers are designed with models of the servers, where AI models are used to control the workloads.

Q: Can we convert heat into something useful on the chip?

A: Converting heat into something useful on the chip is not practical; we cannot convert heat into electricity without generating even more heat in the process.

Q: What about keeping the temperature more constant?

A: Srabanti – The workloads are too variable to moderate the temperature swings. A change in materials to moderate heat is more viable.

Q: What is the future need for thermal management?

A: Jamil – Our energy needs today already exceed our projected energy generation capacity, so the solution is to consume much less energy at a given performance level by operating at a much lower supply voltage and lower temperature. A study of a liquid nitrogen-cooled GPU at 77K found cooling costs on par with forced-air cooling but with a 17% performance advantage.

Q: What about global warming?

A: Liquid nitrogen at 77K is a sweet spot. Placing a GPU in liquid nitrogen costs about the same as forced air, but with improved performance.

Q: For PUE metrics, what about the efficiency of servers per workload?

A: Albert – PUE should be measured at full workload capacity.

Q: Have you tried anything cryogenic?

A: John – Not at NVIDIA yet.

Related Blogs


DAC News – proteanTecs Unlocks AI Hardware Growth with Runtime Monitoring
by Mike Gianfagna on 07-17-2025 at 6:00 am


As AI models grow exponentially, the infrastructure supporting them is struggling under the pressure. At DAC, one company stood out with a solution that doesn’t just monitor chips, it empowers them to adapt in real time to these new workload requirements.

Unlike traditional telemetry or post-silicon debug tools, proteanTecs embeds intelligent agents directly into the chip, enabling real-time, workload-aware insights that drive adaptive optimization. Let’s examine how proteanTecs unlocks AI hardware scaling with runtime monitoring.

What’s the Problem?

proteanTecs recently published a very useful white paper on the topic of how to scale AI hardware. The first paragraph of that piece is the perfect problem statement. It is appropriately ominous.

The shift to GenAI has outpaced the infrastructure it runs on. What were once rare exceptions are now daily operations: high model complexity, non-stop inference demand, and intolerable cost structures. The numbers are no longer abstract. They’re a warning.

Here are a few statistics that should get your attention:

  • Training a model like GPT-4 (Generative Pre-trained Transformer) reportedly consumed 25,000 GPUs over nearly 100 days, with costs reaching $100 million. GPT-5 is expected to break the $1 billion mark
  • Training GPT-4 drew an estimated 50 GWh, enough to power over 23,000 U.S. homes for a year. Even with all that investment, reliability is fragile. A 16,384-GPU run experienced hardware failures every three hours, posing a threat to the integrity of weeks-long workloads
  • Inference isn’t easier. ChatGPT now serves more than one billion queries daily, with operational costs nearing $700K per day.

The innovation delivered by advanced GenAI applications can change the planet, if it doesn’t destroy it (or bankrupt it) first.

What Can Be Done?

Uzi Baruch

During my travels at DAC, I was fortunate to spend some time talking about all this with Uzi Baruch, chief strategy officer at proteanTecs. Uzi has over twenty years of software and semiconductor development and business leadership experience, managing R&D and product teams and large-scale projects at leading global technology companies. He provided a well-focused discussion about a practical and scalable approach to taming these difficult problems.

Uzi began with a simple observation. The typical method to optimize a chip design is to characterize it across all operating conditions and workloads and then develop design margins to keep power and performance in the desired range. This approach can work well for chips that operate in a well-characterized, predictable envelope. The issue is that AI workloads, and generative AI applications in particular, are not predictable.

Once deployed, the workload profile can vary immensely based on the scenarios encountered, and that dramatically changes power and performance profiles while creating big swings in parameters such as latency and data throughput. Getting it all right a priori is like reliably predicting the future, a much sought-after skill that has eluded the finest minds in history.

He went on to point out that the problem isn’t just for the inference itself. The training process faces similar challenges. In this case, wild swings in performance and power demands can cause failures in the process and wasteful energy consumption. If not found, these issues manifest as unreliable, inefficient operation in the field.

Uzi went on to discuss the unique approach proteanTecs has taken to address these very real and growing problems. He described the use of technology that delivers workload-aware real-time monitoring on chip. Thanks to very small, highly efficient on-chip agents, parametric measurements – in-situ and in functional mode – are possible. The system detects timing issues, operational and environmental effects, aging and application stress. Among the suite of Agents are the Margin Agents that monitor timing margins of millions of real paths for more informed decisions. And all of this is tied to the actual instructions being executed by the running workloads.

The proteanTecs solution monitors the actual conditions the chip is experiencing from the current workload profile, analyzes it and reacts to it to optimize the reliability, power and performance profile. All in real time. No more predicting the future but rather monitoring and reacting to the present workload.
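In control-loop terms, the pattern is monitor, analyze, react. The sketch below is a schematic rendering of such a loop; the read_margin and set_vdd interfaces and the thresholds are invented for illustration and are not proteanTecs' actual embedded implementation:

```python
# Schematic adaptive voltage scaling loop driven by timing-margin telemetry.
# read_margin() and set_vdd() are hypothetical placeholders for on-chip
# agent readout and power-management hooks.

GUARD_MV = 20  # minimum acceptable timing headroom, in mV (illustrative)

def avs_step(read_margin, set_vdd, vdd_mv: int) -> int:
    margin = read_margin()        # worst-case path margin from on-chip agents
    if margin < GUARD_MV:         # too close to failure: raise the supply
        vdd_mv += 10
    elif margin > 3 * GUARD_MV:   # ample headroom: reclaim power
        vdd_mv -= 5
    set_vdd(vdd_mv)
    return vdd_mv
```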

A reasonable question here is: what is the overhead of such a system? I asked Uzi, and he explained that area overhead is negligible, as the monitors are very small and can typically be added in the white space of the chip. The gate count overhead is about 1–1.5 percent, but the power reduction can be 8–14 percent. The math definitely works.

I came away from my discussion with Uzi believing that I had seen the future of AI, and it was brighter than I expected.

At the proteanTecs Booth

Noam Brousard

While visiting the proteanTecs booth at DAC, I had the opportunity to attend a presentation by Noam Brousard, VP of Solutions Engineering at proteanTecs. Noam has been with the company for over seven years and brings more than 25 years of systems engineering experience from companies such as Intel and ECI Telecom.

Noam provided a broad overview of the challenges presented by AI and the unique capabilities proteanTecs offers to address those challenges. Here are a couple of highlights.

He discussed the progression from generative AI to artificial general intelligence to something called artificial superintelligence. These metrics compare AI performance to that of humans. He provided a chart shown below that illustrates the accelerating performance of AI across many activities. When the curve crosses zero, AI outperforms humans. Noam pointed out that there will be many more such events in the coming months and years. AI is poised to do a lot more, if we can deliver these capabilities in a cost and power efficient way.

Helping to address this problem is the main focus of proteanTecs. Noam went on to provide a very useful overview of how proteanTecs combines its on-chip agents with embedded software to deliver complete solutions to many challenging chip operational issues. The figure below summarizes what he discussed. As you can see, proteanTecs solutions cover a lot of ground, including dynamic voltage scaling with a safety net, performance and health monitoring, adaptive frequency scaling, and continuous performance monitoring. It’s important to point out that these applications aren’t assisting with design margin strategy; rather, they are monitoring and reacting to real-time chip behavior.

About the White Paper

There is now a very informative white paper available from proteanTecs on the challenges of AI and substantial details about how the company is addressing those challenges. If you work with AI, this is a must-read item. Here are the topics covered:

  • The Unforgiving Reality of Scaling Cloud AI
  • Mastering the GenAI Arms Race: Why Node Upgrades Aren’t Enough
  • Critical Optimization Factors for GenAI Chipmakers
  • Maximizing Performance, Power, and Reliability Gains with Workload-Aware Monitoring On-Chip
  • proteanTecs Real-Time Monitoring for Scalable GenAI Chips
  • proteanTecs AVS Pro™ – Dominating PPW Through Safer Voltage Scaling
  • proteanTecs RTHM™ – Flagging Cluster Risks Before Failure
  • proteanTecs AFS Pro™ – Capturing Frequency Headroom for Higher FLOPS
  • System-Wide Workload and Operational Monitoring
  • Conclusion

To Learn More

You can get your copy of the must-read white paper here: Scaling GenAI Training and Inference Chips with Runtime Monitoring. The company also issued a press release recently that summarizes its activities in this important area here.  And if all this gets your attention, you can request a demo here. And that’s how proteanTecs unlocks AI hardware growth with something called runtime monitoring.

Also Read:

Webinar – Power is the New Performance: Scaling Power & Performance for Next Generation SoCs

proteanTecs at the 2025 Design Automation Conference #62DAC

Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing


U.S. Imports Shifting
by Bill Jewell on 07-16-2025 at 2:00 pm


Our Semiconductor Intelligence June Newsletter showed how U.S. imports of smartphones have been on a downward trend since January 2025, led by declines in imports from China. Other key electronic products have also experienced sharp drops in U.S. imports from China.

U.S. smartphone imports in May 2025 were $3.03 billion, up slightly from April but down more than 50% from January and February 2025. May smartphone imports from China were down 94% from February. India and Vietnam are now the two largest sources of smartphone imports. Apple has been shifting much of its iPhone production to India from China. Samsung does most of its smartphone manufacturing in Vietnam.

U.S. imports of laptop PCs have been relatively stable from January 2025 to May 2025, averaging about $4 billion a month. However, imports from China dropped 90% from January to May. Vietnam displaced China as the largest source of U.S. laptop imports, with imports up 147% from January to May. Dell and Apple produce many of their laptop PCs in Vietnam and HP is expanding production in Vietnam.

Television imports to the U.S. have been fairly steady in February through May 2025, averaging about $1.3 billion a month. As with smartphones and laptop PCs, China’s exports to the U.S. have dropped sharply, with a 61% decline from January to May. TV imports from Mexico declined 40% from January to April but picked up 29% in May. Vietnam is becoming a significant source of TV imports, as it has with smartphones and PCs. U.S. TV imports from Vietnam grew 66% from January to May.

Currently, the U.S. does not impose tariffs on imports of smartphones or computers. However, in May President Trump threatened a 25% tariff on smartphones to be implemented by the end of June. As of mid-July, no smartphone tariff has been implemented.

U.S. imports from Mexico and Canada are subject to a 25% tariff. Goods covered under the USMCA are exempt, which includes electronics. Vietnam is one of only two countries with a new trade agreement with the U.S. in place (the other is the U.K.) and is now subject to a 20% tariff.

China is currently subject to a minimum 10% tariff under a 90-day truce. Product-specific tariffs bring China’s effective tariff rate above 30%. If no agreement is reached, the minimum tariff rate will rise to 34% on August 12, 2025. The Trump administration sees China as the primary target for tariffs and has threatened rates as high as 125%. China has been reducing its exports to the U.S. to avoid punitive tariffs. The U.S. and China are currently in trade talks. Even if a reasonable tariff rate is reached, the damage has been done.

What is the outlook for U.S. electronics consumption? The U.S. has shifted to other countries to make up for the declines in imports from China for laptop PCs and TVs. However, other countries have not yet made up for the severe decline in smartphone imports from China. U.S. smartphone manufacturing is practically non-existent. Only one company, Purism, assembles smartphones in the U.S. Purism has only sold a total of tens of thousands of phones in the last six years in a U.S. market of over 100 million smartphones sold annually. Its Liberty phone sells for $1,999, about twice the price of a high-end iPhone.

IDC estimates global smartphone shipments were 295 million units in 2Q 2025, down 2% from 1Q 2025 and down 1% from a year ago. U.S. smartphone shipments have not been released but will likely show a substantial drop in 2Q 2025 from 1Q 2025 unless sellers have inventory to make up for the shortage in supply. Based on current trends, the U.S. should see a shortage in smartphones in the second half of 2025.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Also Read:

Electronics Up, Smartphones down

Semiconductor Market Uncertainty

Semiconductor Tariff Impact

Weak Semiconductor Start to 2025


Sophisticated soundscapes usher in cache-coherent multicore DSP
by Don Dingee on 07-16-2025 at 10:00 am

A Tensilica 2 to 8 core SMP DSP adds cache-coherence for high-end audio processing and other applications

Digital audio processing is evolving into an art form, particularly in high-end applications such as automotive, cinema, and home theater. Innovation is moving beyond spatial audio technologies to concepts such as environmental correction and spatial confinement. These sophisticated soundscapes are driving a sudden increase in digital signal processing (DSP) performance demands, including the use of multiple DSPs and AI inference in applications. The degree of difficulty in programming multiple DSPs for coordinated audio processing has outweighed the benefits – until now, with the introduction of a cache-coherent multicore DSP solution. Cadence’s Prakash Madhvapathy, Product Marketing Director for HiFi DSP IP, spoke with us about what is changing and how cache-coherent DSPs can unlock new applications.

Sounds that fill – or don’t fill – a space

Spatial audio schemes enable each sound source to possess unique intensity and directionality characteristics, and multiple sources can move through the same space simultaneously. The result can be a highly realistic, immersive 3D experience for listeners in a pristine environment. “Ideally, you’d be able to create your listening environment, say for a soundbar, or headphones, maybe earbuds, or your car,” says Madhvapathy. “You might want to enhance sources, or reduce or remove them altogether.” (For instance, in my home, we have a Bose 5.1 soundbar without a subwoofer because my wife has heightened bass sensitivity.)

Few listening spaces are pristine, however. “Noise can take multiple forms, and they are not always statistically static; they keep changing,” Madhvapathy continues. There can be keyboard clicking, other conversations, traffic noise, and more noise sources. Noise reduction is becoming increasingly complex because what’s noise to one listener might be an important conversation to another, both of whom are hearing sounds in the same space. “Traditional DSP use cases can deliver some noise reduction, but AI is evolving to handle more complex reduction tasks where sources are less predictable than, say, a constant background hum.” AI may also play a role in adapting sound for the space, handling dead spots or reverberations.

High-end automotive sound processing is also becoming much more sophisticated. Some of the latest cars deploy as many as 24 speakers to create listening zones. What the driver hears may be entirely different from what a passenger hears, as cancellation technology provides spatial confinement for the sound each listener experiences, or “sound bubbles” as Madhvapathy affectionately refers to them. “The complexity of different zones in a vehicle can make it difficult to update all of them when using distributed audio processing,” he observes. “The other problem is concurrency – music, phone calls, traffic noise reduction, conversation cancellation, everything has to happen simultaneously and seamlessly, otherwise sound quality suffers for some or all listeners.”

Low-power, high-performance DSPs built on mature cores

Audio processing demand is skyrocketing, and Cadence has turned to a familiar, yet strikingly new solution to increase DSP performance. “Previously, we were able to add performance to one of our HiFi family of DSPs and create enough headroom for customers to meet their audio requirements in a single processor,” says Madhvapathy. “Suddenly, customers are asking for four, six, or eight times the performance they had in our previous HiFi generations to deal with new DSP and AI algorithms.” Multicore has evolved from a DSP architecture that most designers avoided to one that is now essential for competing in the high-end audio market.

The latest addition is the Cadence Tensilica Cache-Coherent HiFi 5s SMP, a symmetric multiprocessor subsystem built on top of their eighth-generation Xtensa core, with additional SIMD registers and DSP blocks incorporated. “Cache-coherence is not a new concept in computer science by any means, but it’s now taking shape in DSP form with the HiFi 5s SMP,” he continues. “Overbuying cores is a problem when attempting hard partitioning of an application across cores, which rarely turns out to be sized correctly. With the HiFi 5s SMP, there’s a shared, cached memory space that all cores can access, and cores can scale up or down for your needs, so there is less wasted energy and cost, and programming is far easier.”
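A toy scheduling model shows why a shared, coherent memory pool beats hard partitioning when per-frame loads vary: any core can pick up any task, so spikes balance out instead of bottlenecking one core. The numbers below are purely illustrative, not Cadence benchmark data:

```python
# Toy comparison: statically partitioned tasks vs. a shared run queue.

import random

random.seed(0)
CORES, TASKS = 4, 16
loads = [random.uniform(0.5, 2.0) for _ in range(TASKS)]  # ms per task this frame

# Hard partition: tasks assigned round-robin to cores at design time
partitioned = [sum(loads[i] for i in range(c, TASKS, CORES)) for c in range(CORES)]

# Shared pool: longest task first onto the least-loaded core
pool = [0.0] * CORES
for t in sorted(loads, reverse=True):
    pool[pool.index(min(pool))] += t

print(f"frame time, hard partition: {max(partitioned):.2f} ms")
print(f"frame time, shared pool:    {max(pool):.2f} ms")
```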

Audio applications gain more advantages. Microphones and speakers can tie into a single processing block with the right number of DSP cores and the right amount of memory. The HiFi 5s DSP cores offer multi-level interrupt handling for real-time prioritization of tasks running in FreeRTOS or Zephyr. They also accommodate power management, including three levels of power shut-off options and clock frequency scaling.

Madhvapathy concludes with a couple of interesting observations. While short life cycles are familiar in consumer devices like soundbars and earbuds, he’s seeing a drastic shortening of life cycles in automotive audio design, with features refreshed every two or three years to remain competitive. Scalability and cache coherence not only make software more straightforward, but they also simplify testing and reduce failures, with fewer instances of cache-related anomalies that don’t appear until designs are in the field and customers are dissatisfied.

Designers are just beginning to imagine what is possible in these sophisticated soundscapes, and the arrival of more DSP performance, along with ease of programming and scalability, is timely.

Learn more online about the Cadence Cache-Coherent HiFi 5s SMP for high-end audio processing:

News: Cadence Launches Cache-Coherent HiFi 5s SMP for Next-Gen Audio Applications
Product page: Cache-Coherent HiFi 5s Symmetric Multiprocessor
White paper: Cache-Coherent Symmetric Multiprocessing with LX8 Controllers on HiFi DSPs


A Quick Look at Agentic/Generative AI in Software Engineering
by Bernard Murphy on 07-16-2025 at 6:00 am


Agentic methods are hot right now since single LLM models seem limited to point tool applications. Each such application is impressive but still a single step in the more complex chain of reasoning tasks we want to automate, where agentic methods should shine. I have been hearing that software engineering (SWE) teams are advancing faster in AI adoption than hardware teams, so I thought it would be useful to run a quick reality check on status. Getting into the spirit of this idea, I used Gemini Deep Research to find sources for this article, selectively sampling a few surveys it offered while adding a couple of my own finds. My quick summary is, first, that what counts as progress depends on the application: convenience-based use-models are more within reach today, while precision use-models are also possible but more bounded. And second, advances are more evident in automating subtasks subject to a natural framework of crosschecks and human monitoring, rather than a hands-free total SWE objective.

Automation for convenience

One intriguing paper suggests that we should move away from apps for convenience needs towards prompt-based queries that serve the same objectives. This approach can in principle do better than apps because prompt-based systems eliminate the need for app development, can be controlled through the language we all speak without cryptic human-machine interfaces, and can more easily adapt to variations in needs.

Effective prompt engineering may still be more of an art than we would prefer, but the author suggests we can learn how to become more effective and (my interpretation) perhaps we only need to learn this skill once rather than for every unique app.

Even technology engineers need this kind of support, not in deep development or analysis but in routine yet important questions: “who else is using this feature, when was it most recently used, what problems have others seen?” Traditionally these might be answered by a help library or an in-house data management app, but what if you want to cross your question with other sources or constraints outside the scope of that app? In hardware development, imagine the discovery power available if you could do prompt-based searches across all design data: spec, use cases, source code, logs, waveforms, revisions, and so on.

Automating precision development

This paper describes an agentic system to develop quite complex functions including a face recognition system, a chat-bot system, a face mask detection tool, a snake game, a calculator, and a Tic-Tac-Toe game, using an LLM-based agentic system with agents for management, code generation, optimization, QA, iterative refinement and final verification. It claims 85% or better code accuracy against a standard benchmark, building and testing these systems in minutes. At 85% accuracy, we must still follow that initial code with developer effort to verify and correct to production quality. But assuming this level of accuracy is repeatable, it is not hard to believe that even given a few weeks or months of developer testing and refinement, the net gain in productivity without loss of quality can be considerable.
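In outline, that kind of pipeline is a loop over specialized roles. The skeleton below is my paraphrase of the architecture described in the paper, with run_agent as a hypothetical wrapper around role-specific LLM calls:

```python
# Skeleton of an agentic development loop: plan, generate, test, refine.
# run_agent(role, payload) is a hypothetical stand-in for LLM calls.

def run_agent(role: str, payload: dict) -> dict:
    raise NotImplementedError("dispatch to an LLM with a role-specific prompt")

def build(spec: str, max_iters: int = 5) -> str:
    plan = run_agent("manager", {"spec": spec})
    draft = run_agent("codegen", plan)
    for _ in range(max_iters):
        report = run_agent("qa", draft)   # generate and run tests on the draft
        if report.get("passed"):
            break
        draft = run_agent("refine", {"draft": draft, "report": report})
    return run_agent("final_verify", draft)["code"]
```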

Another paper points out that in SWE there is still a trust issue with automatically developed code. However, they add that most large-scale software development is more about assembling code from multiple sources than developing code from scratch, which changes the trust question to how much you can trust components and assembly. I’m guessing that they consider assembly in DevOps to be relatively trivial, but in hardware design SoC-level assembly (or even multi-die system assembly) is more complex, though still primarily mechanical rather than creative. The scope for mistakes is certainly more limited than it would be in creating a complete new function from scratch. I know of an AI-based system from over a decade ago that could create most of the integration infrastructure for an SoC – clocking, reset, interrupt, bus fabric, etc. This was long before we’d heard of LLMs and agents.

Meanwhile, Agentic/Generative AI isn’t only useful for code development. Tools are appearing to automate test design, generation and execution, for debug, and more generally for DevOps. Many of these systems in effect crosscheck each other and are also complemented by human oversight. Mistakes might happen but perhaps no more so than in an AI-free system.

Convenience, precision or a bit of both?

Engineers obsess about precision, especially around AI. But much of what we do during our day doesn’t require precision. “Good enough” answers are OK if we can get them quickly. Search, summarizing key points from an email or paper, generating a first draft document, these are all areas where we depend on (or would like) the convenience of a quick and “good enough” first pass. On the other hand, precision is vital in some contexts. For financial transactions, jet engine modeling, logic simulation we want the most accurate answers possible, where “good enough” isn’t good enough.

Even so, there can still be an advantage for precision applications. If AI can provide a good enough starting point very quickly (minutes) and if we can manage our expectations by accepting need to refine and verify beyond that starting point, then the net benefit in shortened schedule and reduced effort may be worth the investment. As long as you can build trust in the quality the AI system can provide.

Incidentally, my own experience (I tried Deep Research (DR) options in Gemini, Perplexity and Chat GPT) backs up my conclusions. Each DR analysis appeared in ~10 minutes, mostly useful to me for the references they provided rather than the DR summaries. Some of these references were new to me, some I already knew. That might have been enough if my research was purely for my own interest. But I wanted to be more accurate since I’m aiming to provide reliable insight, so I also looked for other references through more conventional on-line libraries. Combining both methods proved to be productive!


Improve Precision of Parasitic Extraction for Digital Designs
by Admin on 07-15-2025 at 10:00 am


By Mark Tawfik

Parasitic extraction is essential in integrated circuit (IC) design, as it identifies unintended resistances, capacitances, and inductances that can impact circuit performance. These parasitic elements arise from the layout and interconnects of the circuit and can affect signal integrity, power consumption, and timing. As IC designs shrink to smaller nodes, parasitic effects become more pronounced, making accurate extraction crucial for ensuring design reliability. By modeling these effects, designers can adjust their circuits to maintain performance, avoid issues like signal delays or power loss, and achieve successful design closure.

What is parasitic extraction

In semiconductor design, parasitic elements—like resistances, capacitances, and inductances—are unintended but inevitable components that emerge during the physical fabrication of integrated circuits (ICs). These elements are a result of the materials used and the complexity of the fabrication process. Although not part of the original design, parasitic elements can significantly impact circuit performance. For example, parasitic resistances can cause voltage drops and increased power dissipation, while parasitic capacitances can lead to signal delays, distortions, and crosstalk between adjacent wires. Additionally, interconnect parasitics introduce propagation delays that can affect timing and signal integrity, leading to higher power consumption and reduced overall performance.

Parasitic extraction is a critical process in IC design that identifies and models these unintended parasitic effects to ensure reliable performance. In digital design, parasitic extraction relies heavily on standardized formats like LEF (Library Exchange Format) and DEF (Design Exchange Format), which describe both the logical and physical aspects of the design (figure 1).

Figure 1. Parasitics are extracted from the physical and logical information about the design.

The parasitic extraction process typically follows these key steps:

  • Data preparation: This step involves assembling and aligning the logical and physical design data, usually sourced from LEF and DEF files. The purpose is to ensure each logical component is correctly mapped to its corresponding physical location in the layout, ensuring accurate connectivity for the parasitic extraction process.
  • Extraction: During extraction, parasitic components such as resistances, capacitances, and interconnects are identified and captured from the design layout and technology data. This forms the basis for understanding how these parasitic elements might impact the overall performance of the circuit.
  • Reduction: Once parasitic elements are extracted, they are simplified using models such as distributed RC or lumped element models. These models condense the parasitic data, making it easier to manage while still accurately reflecting the parasitic effects for simulation and analysis.
  • Verification: After extraction, the data is subjected to verification. This involves comparing the parasitic data with design specifications and simulation results to ensure it aligns with the expected circuit performance and complies with necessary design rules and criteria for sign-off.
  • Optimization: After verifying the parasitics, designers can apply various optimization techniques to reduce their negative impact on the circuit. This can include refining routing paths, adding buffers, or making other adjustments to improve performance, timing, power consumption, and signal integrity.
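To illustrate the reduction step, the sketch below compares a distributed RC ladder, evaluated with the classic Elmore delay estimate, against the crudest lumped RC model. The wire values are hypothetical, and sign-off flows use far richer models than this first-order approximation:

def elmore_delay(r_segments, c_segments):
    """Elmore delay of an RC ladder: each section's capacitance is charged
    through all of the resistance upstream of it."""
    delay = 0.0
    upstream_r = 0.0
    for r, c in zip(r_segments, c_segments):
        upstream_r += r
        delay += upstream_r * c
    return delay

n = 10                              # number of ladder sections
R_total, C_total = 500.0, 200e-15   # hypothetical total wire R and C
rs = [R_total / n] * n              # distributed model: split R evenly
cs = [C_total / n] * n              # distributed model: split C evenly

distributed = elmore_delay(rs, cs)
lumped = R_total * C_total          # single lumped RC, the crudest reduction

print(f"Distributed (Elmore): {distributed * 1e12:.0f} ps")
print(f"Lumped RC:            {lumped * 1e12:.0f} ps")

Note how the distributed model predicts roughly half the delay of the single lumped RC, which is why the choice of reduction model matters for timing accuracy.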

Accurate parasitic extraction is crucial for successful IC design, particularly as technology advances and parasitic effects become more pronounced. By systematically modeling, verifying, and optimizing these effects, designers can ensure that their circuits perform reliably and meet required specifications during fabrication and final production.

Analog and digital design flows

Analog and digital design flows are two distinct approaches in semiconductor design, each suited to the specific requirements of analog and digital integrated circuits (ICs). Analog design deals with circuits that process continuous signals, such as amplifiers, filters, and analog-to-digital converters (ADCs). Precision is crucial in these circuits to minimize noise, distortion, and power consumption. Designers face challenges like balancing trade-offs between power efficiency and noise reduction, requiring manual layout adjustments to avoid performance issues caused by small variations. Tools like SPICE simulators help model circuit behavior under different conditions to ensure reliability and performance. Analog circuits are highly sensitive to their physical layout and are thoroughly tested in different operating conditions.

On the other hand, digital design focuses on circuits that use binary signals (0s and 1s) and components such as logic gates, flip-flops, and various types of logic circuits. Digital design prioritizes speed, energy efficiency, and resistance to noise, relying more on automation and standardized components to streamline the process. Hardware description languages like Verilog and VHDL allow designers to define the circuit's behavior, which is then automatically synthesized into a layout. Digital workflows make use of timing analysis, logic simulation, and verification tools to ensure the circuit operates correctly and meets performance requirements. While digital circuits can be complex, their binary nature allows for more straightforward layouts compared to analog circuits.

However, as technology advances and node sizes shrink, both analog and digital designs face new challenges. Analog designs must deal with increased noise sensitivity and parasitic effects, while digital designs need to address timing, power consumption, and signal integrity issues at higher circuit densities. Despite these complexities, modern design tools and methods help ensure that ICs meet the required performance, power, and reliability standards. Both design flows play critical, complementary roles in IC development, with analog design focusing on precision and manual adjustments, and digital design emphasizing automation and efficiency. Designers in both areas must navigate intricate trade-offs to produce high-performance, reliable ICs in a rapidly advancing technological environment.

Parasitic extraction tools

Parasitic extraction tools for semiconductor design are generally divided into three main categories: field solver-based, rule-based, and pattern matching, each with its own strengths and suited to different design requirements (figure 2).

Figure 2. Software tools used for parasitic extraction are traditionally field-solver or rule-based tools. Pattern matching is a newer technique.

Field solvers. Field solver-based approaches use numerical techniques to solve electromagnetic field equations, such as Maxwell’s equations, which allow them to model complex geometries and interconnects with a high degree of accuracy. These methods excel at capturing distributed parasitics, making them particularly useful for designs where detailed insights into electromagnetic phenomena are crucial. This precision is essential for high-frequency circuits, radio frequency (RF) designs, and other advanced applications that demand a deep understanding of parasitic effects to ensure performance integrity. However, the trade-off with field solver methods is their computational intensity. Since they solve complex mathematical equations across fine geometric details, they require significant computational resources and time, especially when applied to large-scale designs. This limits their widespread use in routine workflows, relegating them mostly to specialized tasks where the highest level of accuracy is a necessity.
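The following toy Python sketch conveys the field-solver idea: it relaxes Laplace's equation on a small 2D grid and estimates the capacitance between two conductors from the resulting field. It is a minimal sketch only; the geometry and boundary handling are hypothetical simplifications, and real solvers handle 3D structures, multilayer dielectrics, and proper boundary conditions:

import numpy as np

eps0 = 8.854e-12    # vacuum permittivity, F/m
nx, ny = 60, 40     # grid points
# (the grid spacing cancels out of this 2D capacitance-per-unit-length estimate)

phi = np.zeros((ny, nx))                  # electrostatic potential
mask = np.zeros((ny, nx), dtype=bool)     # fixed-potential (conductor) cells

# Two parallel wire cross-sections: left conductor at 1 V, right at 0 V
mask[10:30, 15] = True; phi[10:30, 15] = 1.0
mask[10:30, 45] = True; phi[10:30, 45] = 0.0

# Jacobi relaxation of Laplace's equation (toy periodic boundaries via roll)
for _ in range(5000):
    new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(mask, phi, new)

# Charge per unit length on the 1 V conductor via Gauss's law, approximated
# by summing potential differences to non-conductor neighbor cells.
q = 0.0
ys, xs = np.where(mask & (phi > 0.5))
for y, x in zip(ys, xs):
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if not mask[y + dy, x + dx]:
            q += eps0 * (phi[y, x] - phi[y + dy, x + dx])

print(f"Capacitance per unit length ~ {q:.3e} F/m")   # C = Q / (1 V)

Even at this toy scale, thousands of iterations are needed for one small window, which hints at why full-chip field solving is so expensive.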

Rule-based. Rule-based extraction tools, in contrast, operate on predefined models and design guidelines, which allow them to estimate parasitic elements in a quicker and more scalable manner. These tools rely on established rules derived from previous simulations and physical laws, applying them across the design layout to extract parasitics. Although rule-based methods may not capture the same level of fine detail as field solvers, they are highly efficient, offering much faster extraction times and the ability to handle larger, more complex designs without overwhelming computational resources. This makes them the preferred option for most digital and analog IC design workflows, where designers prioritize a balance between speed, accuracy, and scalability. Rule-based tools are particularly well-suited for mainstream applications, where the trade-offs in precision are acceptable, and the design geometries are not as complex or demanding as in high-frequency or RF circuits. These tools are also more user-friendly, requiring less setup and computational overhead, making them accessible for a broader range of design projects.
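A minimal sketch of the rule-based idea follows: rather than solving field equations, the tool looks up pre-characterized coefficients, here keyed by wire spacing. The table values are hypothetical placeholders for what a foundry-qualified rule deck would provide:

# Hypothetical coupling-capacitance rules for a metal layer:
# characterized spacings (um) and the coupling cap per unit length (fF/um)
spacing_breakpoints = [0.05, 0.10, 0.20, 0.50]
coupling_per_um     = [0.30, 0.18, 0.09, 0.03]

def coupling_cap(spacing_um, length_um):
    """Pick the rule-table entry characterized closest to the actual spacing."""
    i = min(range(len(spacing_breakpoints)),
            key=lambda k: abs(spacing_breakpoints[k] - spacing_um))
    return coupling_per_um[i] * length_um

# Two wires running side by side for 80 um at 0.1 um spacing
print(f"Coupling C: {coupling_cap(0.10, 80.0):.1f} fF")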

Pattern matching, often considered a 2.5D extraction technique, helps by recognizing recurring layout patterns in the design. It uses pre-characterized parasitic values for specific geometric configurations to speed up the extraction process without performing complex calculations for each instance. Pattern matching provides a balance between speed and accuracy, making it suitable for large-scale designs that involve repetitive structures, such as standard cells or repeated circuit blocks.
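The sketch below illustrates the pattern-matching idea: layout windows that share the same geometric signature reuse pre-characterized parasitics instead of being re-solved, with a (notionally slow) solver invoked only on a cache miss. The signatures and values are hypothetical:

# Pattern library: canonical geometry signature -> pre-solved parasitics
pattern_library = {
    ("metal1", "width=0.05", "space=0.05", "neighbors=2"): {"c_ff_per_um": 0.25},
    ("metal1", "width=0.05", "space=0.10", "neighbors=2"): {"c_ff_per_um": 0.17},
}

def extract_window(signature, length_um, solver_calls):
    """Return cached parasitics on a hit; fall back to a slow solver on a miss."""
    entry = pattern_library.get(signature)
    if entry is None:
        solver_calls.append(signature)       # a field solver would run here
        entry = {"c_ff_per_um": 0.20}        # placeholder solver result
        pattern_library[signature] = entry   # cache for future matches
    return entry["c_ff_per_um"] * length_um

misses = []
sig = ("metal1", "width=0.05", "space=0.05", "neighbors=2")
cap = extract_window(sig, 40.0, misses)
print(f"{cap:.1f} fF with {len(misses)} solver calls")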

Choosing an extraction tool

The decision between different parasitic extraction tools depends on the specific needs of the design. Field solver methods are ideal for specialized applications where accuracy cannot be compromised, such as in RF, microwave, and millimeter-wave designs, or in advanced nodes with dense and complex interconnect structures. Rule-based tools are the backbone of mainstream design flows, offering a practical and scalable solution for most digital and analog ICs. Pattern matching provides a flexible middle-ground solution, enhancing extraction efficiency for repetitive structures.

Designers must evaluate performance requirements, resource constraints, and the complexity of their designs to choose the appropriate methodology. In many cases, a combination of approaches may be used: field solvers for critical areas requiring high precision, rule-based methods for the bulk of the design, and pattern matching to optimize efficiency in recurring design patterns, providing an optimal balance of efficiency and accuracy throughout the design process.
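A minimal sketch of that hybrid strategy might dispatch each net to the cheapest method that meets its accuracy needs. The criteria and nets below are hypothetical; production tools make this choice with far richer heuristics:

def choose_method(net):
    """Route each net to the cheapest extraction method that suits it."""
    if net.get("rf") or net.get("critical_timing"):
        return "field_solver"       # highest accuracy, highest cost
    if net.get("repetitive_pattern"):
        return "pattern_matching"   # reuse pre-characterized results
    return "rule_based"             # fast default for the bulk of the design

nets = [
    {"name": "lna_in",   "rf": True},
    {"name": "sram_bit", "repetitive_pattern": True},
    {"name": "ctrl_bus"},
]
for net in nets:
    print(f"{net['name']}: {choose_method(net)}")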

Some tools, including Calibre xACT, employ both rule-based and field solver approaches and also offer pattern matching. For most designers, high precision in extracting interconnect parasitics such as resistances and capacitances is critical for understanding IC performance. An advanced extraction tool can capture detailed interactions between interconnects and devices within the IC, offering important insights for optimizing design performance and addressing signal integrity challenges (figure 3).

Figure 3. Inputs and outputs of a digital extraction flow.

Conclusion

Efficient parasitic extraction is vital for optimizing IC performance by accurately modeling resistances, capacitances, and other parasitic elements. Designers have options when it comes to extraction tools, so they should consider one that supports both analog and digital design flows, can find and mitigate parasitic effects that impact signal integrity, timing closure, and power efficiency, and is qualified for all design nodes. Precise extraction results help designers make informed decisions early in the design process, ensuring robust and reliable IC development.

Mark Tawfik

Mark Tawfik is a product engineer in the Calibre Design Solutions division of Siemens Digital Industries Software, supporting the Calibre PERC and PEX reliability platform. His current work focuses on circuit reliability verification, parasitic extraction and packaged checks implementation. He holds a master’s degree from Grenoble Alpes University in Micro-electronics integration in Real-time Embedded Systems Engineering.

Also Read:

Revolutionizing Simulation Turnaround: How Siemens’ SmartCompile Transforms SoC Verification

Siemens EDA Unveils Groundbreaking Tools to Simplify 3D IC Design and Analysis

Jitter: The Overlooked PDN Quality Metric