
Agile Analog Update at #62DAC

by Daniel Payne on 07-21-2025 at 10:00 am


On the last day of DAC 2025 I met with Chris Morrison, VP of Product Marketing at Agile Analog, to get an update. The company provides “analog IP, the way you want it,” and I knew that they had internal tools and a novel methodology to speed up the development process. This year they have started talking more about their internal IP automation tool, Composa.

Why use an analog IP automation tool?

Chris told me that conventional analog design faces a list of challenges: a shortage of analog designers, too many processes and options, advanced nodes that are difficult due to new parasitics, and manual analog design that is far too slow. Their answer is to address these challenges with analog IP automation.

The approach combines analog experts inside the company with software developers to auto-generate schematics for IP. The Composa tool works with OpenAccess, the API from the Si2 OpenAccess Coalition. Composa users first define their requirements, like SNR, supply rails, bandwidth and other specifications. Then there is a set of common analog building blocks, elements with their own characteristics that are combined to define the new IP. For example, an analog-to-digital converter (ADC) needs a sample switch, an input buffer and other blocks. These lower-level blocks are combined, a PDK is selected for a specific process, and then the tool optimizes the transistor W/L sizes using analytical equations.

A traditional approach to circuit sizing involves running lots of SPICE simulations, but Composa uses a much faster method of equation-based device sizing. For circuits with feedback, some SPICE runs may still be used. The optimization process with Composa is not CPU intensive at all, typically requiring only a few minutes of CPU time to arrive at the proper device sizes that meet your specifications. Full verification of the analog IP is done with a traditional flow, including many Monte Carlo simulations. There is little or no manual tweaking of device sizes required to meet your specs.
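To make the idea of equation-based sizing concrete, here is a minimal sketch using the textbook long-channel square-law MOSFET model. This is not Agile Analog's actual algorithm, and the k' and Lmin values below are invented for illustration:

```python
# Illustrative equation-based device sizing (square-law model, not Composa's
# real method): compute W/L directly from a target transconductance and bias
# current, with no SPICE iteration in the loop.

def size_nmos(gm_target, i_d, k_prime, l_min):
    """Return (W, L, Vov) for a target gm at drain current i_d."""
    v_ov = 2.0 * i_d / gm_target                      # from gm = 2*Id/Vov
    w_over_l = gm_target**2 / (2.0 * k_prime * i_d)   # from Id = 0.5*k'*(W/L)*Vov^2
    w = w_over_l * l_min                              # choose minimum length, scale width
    return w, l_min, v_ov

# Example: gm = 1 mS at Id = 100 uA, assumed k' = 200 uA/V^2, Lmin = 0.13 um
w, l, vov = size_nmos(gm_target=1e-3, i_d=100e-6, k_prime=200e-6, l_min=0.13e-6)
print(f"Vov = {vov*1e3:.0f} mV, W/L = {w/l:.1f}")
```

Because the sizing is a closed-form calculation, retargeting to a different process is just a change of the k' parameter, which is consistent with the minutes-not-weeks turnaround described above.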

With Composa the engineers at Agile Analog can get to the exact specs for an IP block in minutes, not days or weeks of manual effort. Even changing to a different PDK will show new results in just a few minutes.

Customers of Agile Analog span a broad range of sectors and applications: power management ICs (PMICs), data converters, chip health and monitoring, PVT, IoT, defense, and security, including anti-tamper IP that protects against voltage glitching, clocking attacks, and electromagnetic injection. Defense customers could be designing at 165nm or 130nm process nodes and datacom customers at 3nm, so Composa creates analog IP for quite a wide spectrum of processes.

Digital designers have used logic synthesis to retarget process nodes for decades, and this is now possible with analog design. If a customer wants a new oscillator, Composa can be used to create a schematic and layout. Composa is an expert system: it is repeatable, its results are human understandable, and device sizing is not a probability problem.

Composa is a no-code system for users: parameters are entered in a YAML script to configure what you want. Internally, the team just fills in the YAML to control each IP block generator. Composa has evolved over time by expanding the element library and being verified across all supported PDKs, including some tuning for each new PDK. The Composa tool has created some 60 new IPs in the last two years.
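As a purely hypothetical illustration of this no-code, spec-driven pattern (none of these field names come from Composa's real YAML schema), a requirements file boils down to a set of declared parameters that a generator validates and then consumes:

```python
# Hypothetical spec-driven configuration, shown as the equivalent Python dict.
# All field names and values are invented for illustration only.

adc_spec = {
    "block": "adc_sar_12b",   # which generator to run (hypothetical name)
    "pdk": "example_n5",      # target process kit (hypothetical name)
    "supply_v": 0.8,
    "snr_db": 68.0,
    "bandwidth_hz": 10e6,
}

def check_spec(spec, required=("block", "pdk", "supply_v", "snr_db")):
    """Fail fast if a required requirement is missing from the spec."""
    missing = [k for k in required if k not in spec]
    if missing:
        raise ValueError(f"spec missing fields: {missing}")
    return True

print(check_spec(adc_spec))  # True
```

The point of the pattern is that the user only declares requirements; all circuit knowledge lives in the generator behind the spec.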

Analog security IP is of special interest for the Agile Analog team as security has become a critical requirement for every SoC being developed. The company believes that it can offer differentiated anti-tamper solutions that are complementary to other providers of RoT (Root of Trust) and cryptographic engines, delivering value at the subsystem level with their security IP offerings. Another focus area is their data conversion IP solutions. They are working with a strategic customer to deploy their 12-bit ADC on the latest TSMC nodes.

Agile Analog is based in the UK, while Krishna Anne, the CEO, is in Silicon Valley. 2025 has been another good year of revenue growth at the company. Visit their website for more product information. They have direct sales in the US and Europe, with distributors in Taiwan, Korea and China. Catch up with the Agile Analog team at the GlobalFoundries and TSMC events.

Summary

Analog IP is in high demand, but the older manual methods to hand-craft IP simply take too long and require expert experience. Agile Analog has a different approach, using their Composa tool and a library of analog building blocks to automate the IP creation process. What used to take days or weeks of engineering effort can now be accomplished in minutes with this new methodology, significantly reducing the complexity, time and costs associated with traditional analog IP development.



Accelerating IC Design: Silvaco’s Jivaro Parasitic Reduction Tool

by Daniel Nenni on 07-21-2025 at 8:00 am


In Silvaco’s July 2025 video presentation at the 62nd Design Automation Conference (DAC), Senior Staff Applications Engineer Tim Colton introduced Jivaro, a specialized parasitic reduction tool designed to tackle the escalating challenges of post-layout simulation in advanced IC designs. As semiconductor nodes shrink and designs grow more intricate, mirroring the AI chip complexity outlined in Synopsys’ guide, parasitics explode, inflating simulation runtimes. Jivaro, in production for over 15 years, stands out as a standalone solution that accelerates simulations by 2x to 15x without sacrificing accuracy, fitting seamlessly between the extraction and simulation stages.

Colton emphasized the tool’s core value: enhancing designer productivity amid tightening cycles. For instance, a simulation dropping from seven days to three enables multiple iterations daily, fostering innovation. This aligns with Synopsys’ “shift-left” methodology, where early optimization reduces risks. Jivaro’s process- and node-agnostic nature—operating on DSPF databases—ensures broad applicability, from FinFET nodes to high-speed analog blocks. It strips dummy devices in lower nodes, streamlining simulator workloads.

Key features include customizable reduction. Users categorize nets—power (aggressive reduction), intermediate (balanced), and critical (high accuracy)—tailoring accuracy to 1% on sensitive paths while aggressively pruning others. Multi-finger device merging connects parallel transistors textually, slashing device counts by up to 30% and runtime accordingly. In one example, a 10k-transistor 5nm analog block saw simulation time halve from five hours to 2.5 hours using the extractor's embedded reduction, then drop to 1.25 hours, a quarter of the original, with Jivaro's selective settings. For larger designs, a 20-day run compressed to 10 days, enabling previously infeasible analyses.
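The multi-finger merging idea can be sketched in a few lines: transistors wired in parallel on the same nets, with the same model and length, are electrically equivalent to one device with the summed width. This is only a conceptual illustration, not Jivaro's implementation:

```python
# Conceptual sketch of multi-finger device merging: group transistors that
# share drain/gate/source/bulk nets, model, and length, then sum their widths
# into a single equivalent device, shrinking the netlist the simulator sees.

from collections import defaultdict

def merge_parallel_devices(devices):
    """devices: list of dicts with keys d, g, s, b, model, l, w."""
    groups = defaultdict(float)
    for dev in devices:
        key = (dev["d"], dev["g"], dev["s"], dev["b"], dev["model"], dev["l"])
        groups[key] += dev["w"]
    return [
        {"d": d, "g": g, "s": s, "b": b, "model": m, "l": l, "w": w}
        for (d, g, s, b, m, l), w in groups.items()
    ]

# Four 1 um fingers of the same transistor collapse to one 4 um device
fingers = [{"d": "out", "g": "in", "s": "vss", "b": "vss",
            "model": "nch", "l": 5e-8, "w": 1e-6} for _ in range(4)]
merged = merge_parallel_devices(fingers)
print(len(merged), merged[0]["w"])
```

Fewer device instances means fewer matrix entries per solve, which is where the device-count and runtime reductions cited above come from.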

Advanced capabilities like guarded parasitic reduction preserve signal integrity in high-accuracy scenarios. A recent enhancement, driven by customer feedback from a major cellphone producer, replaces MOS caps (three-pin transistors) with two-pin capacitors, yielding an additional 2x speedup on RF designs already optimized 3x. This underscores Jivaro’s evolution, supporting markets like displays, power devices, and AI-driven SoCs.

Real-world adoption validates its impact. Customers like Silicon Creations (PLLs), Etopus (PHYs), and Alpine leverage Jivaro for iterative, high-precision workflows. By offloading simulations to smaller machines, it optimizes resource use, addressing talent and cost constraints in an AI chip market projected to reach $383B by 2032.

Ultimately, Jivaro exemplifies how targeted EDA tools master complexity. As AI workloads demand scalable silicon, solutions like this—echoing Synopsys’ holistic approach—ensure first-pass success, boosting coverage without accuracy trade-offs. Colton’s demo positions Silvaco as a vital ecosystem player, accelerating the journey from big ideas to breakthroughs.

About Silvaco Group, Inc.
Silvaco is a provider of TCAD, EDA software, and SIP solutions that enable semiconductor design and digital twin modeling through AI software and innovation. Silvaco’s solutions are used for semiconductor and photonics processes, devices, and systems development across display, power devices, automotive, memory, high performance compute, foundries, photonics, internet of things, and 5G/6G mobile markets for complex SoC design. Silvaco is headquartered in Santa Clara, California, and has a global presence with offices located in North America, Europe, Egypt, Brazil, China, Japan, Korea, Singapore, Vietnam, and Taiwan. Learn more at silvaco.com.

Also Read:

Silvaco’s Diffusion of Innovation: Ecosystem Investments Driving Semiconductor Advancements

Analysis and Exploration of Parasitic Effects

Silvaco: Navigating Growth and Transitions in Semiconductor Design


Protecting Sensitive Analog and RF Signals with Net Shielding

by Admin on 07-21-2025 at 6:00 am


By Hossam Sarhan

Communication has become the backbone of our modern world, driving the rapid growth of the integrated circuit (IC) industry, particularly in communication and automotive applications. These applications have increased the demand for high-performance analog and radio frequency (RF) designs.

However, designing analog and RF circuits can be quite challenging due to their sensitivity to various factors. Changes in layout design, operating conditions, and manufacturing processes can all have a significant impact on circuit performance. One of the major hurdles faced by analog designers is the issue of noise coupling between interconnects.

The proximity and interactions between different circuit elements can lead to signal noise, which can degrade the overall circuit performance. This is a critical concern, as analog and RF circuits are more susceptible to proximity effects, such as crosstalk and coupling noise, compared to their digital counterparts.

Mitigating noise coupling with net shielding

One of the widely used techniques to protect critical nets in analog and RF circuit designs is net shielding. This approach involves surrounding the sensitive signal nets with power or ground nets, which create a shielding effect that helps mitigate the impact of electromagnetic interference and crosstalk on the critical signal traces.

The power and ground nets, with their stable and low-noise characteristics, act as a barrier to isolate the critical signals from noise sources. This shielding helps maintain the integrity of the sensitive signals, preventing unwanted noise and disturbances. Figure 1 illustrates net shielding.

Figure 1: Net shielding methodology.

Additionally, the geometries belonging to the same net, when placed in close proximity to each other, can also act as a form of self-shielding. The proximity of the same-net traces creates a shielding effect, further protecting the critical signals from external interference.

By employing net shielding techniques, circuit designers can effectively safeguard the performance and reliability of analog and RF circuits, ensuring that the critical signals are isolated from noise sources and maintain their intended behavior.

Verifying net shielding effectiveness

Verifying the effectiveness of net shielding is not a straightforward task, as it requires tracing the critical net segments and checking the surrounding nets to confirm how much of the victim net is shielded. This process can be time-consuming and error-prone if done manually.

To address this challenge, designers can adopt an advanced reliability verification platform that provides comprehensive net shielding verification. A solution like Calibre PERC from Siemens EDA offers a packaged checks framework for net shielding verification that automates the verification process, streamlining the design validation workflow. This framework permits simple selection and configuration of pre-coded checks, maximizing ease of use and minimizing runtime setup. Calibre PERC packaged checks are provided as dedicated, ready-to-use checks that enhance the reliability of analog circuits.

The input for the packaged checks flow is a user configuration file with specified checks and their parameters. This input constraint file is processed by a package manager, which accesses the checks database and creates a rule file containing all of the selected checks, with the proper configuration parameters to run on the designated design. Figure 2 shows the net shielding setup using the Calibre PERC packaged checks GUI.

The advanced net shielding checks allow designers to specify the critical nets in their design and the minimum shielding percentage threshold required. The verification tool then automatically traces each critical net, analyzes the surrounding shielding nets and calculates the shielding percentage for each net.
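Conceptually, the shielding percentage is a coverage metric. Here is a deliberately simplified sketch that models the victim net as a one-dimensional run and the shields as (possibly overlapping) intervals along it; real tools like Calibre PERC trace actual layout geometry in two dimensions:

```python
# Simplified shielding-percentage metric: what fraction of a victim net's
# length is flanked by shield (power/ground) geometry? Shield intervals are
# clipped to the victim, merged where they overlap, and summed.

def shielding_percentage(victim_len, shield_intervals):
    """shield_intervals: list of (start, end) along the victim, may overlap."""
    clipped = sorted((max(0.0, s), min(victim_len, e))
                     for s, e in shield_intervals if e > 0 and s < victim_len)
    covered, last_end = 0.0, 0.0
    for s, e in clipped:
        s = max(s, last_end)      # skip the part already counted
        if e > s:
            covered += e - s
            last_end = e
    return 100.0 * covered / victim_len

# 10 um victim with shields along [0,4] and [3,7] um: 7 um covered
print(shielding_percentage(10.0, [(0.0, 4.0), (3.0, 7.0)]))  # 70.0
```

A check then simply compares this percentage against the designer's minimum threshold and flags any victim net that falls below it.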

The verification results can be viewed and cross-probed in the layout to help with debug.

By leveraging automated net shielding verification, designers can quickly and reliably validate the effectiveness of their net shielding implementation, ensuring that the critical signals are adequately protected from noise sources. This streamlined approach helps designers identify and address any net shielding issues, enhancing the overall reliability and performance of their analog and RF circuits.

The key benefits of using an advanced net shielding verification tool include:

  • Automated verification: The platform’s dedicated net shielding check eliminates the manual and error-prone process of tracing net segments and calculating shielding coverage, saving designers significant time and effort.
  • Streamlined integration: The platform’s packaged checks framework allows designers to easily integrate net shielding verification into their overall design validation flow, enabling them to combine multiple reliability checks into a single validation run.
  • Improved reliability: By quickly and reliably validating the effectiveness of net shielding implementation, the advanced platform helps designers identify and address any issues, ensuring the overall reliability and performance of their sensitive analog and RF circuits.

Conclusion

Protecting critical signals from noise coupling is a crucial aspect of successful analog and RF circuit design. Net shielding is a widely used technique that involves surrounding sensitive signal nets with power or ground nets to create a shielding effect, mitigating the impact of electromagnetic interference and crosstalk.

However, verifying the effectiveness of net shielding can be a challenging task. Fortunately, solutions exist. Designers can easily adopt an advanced reliability verification platform to provide automated and streamlined net shielding verification. With the right tools, designers can quickly identify and address any issues, ultimately enhancing the reliability and performance of their analog and RF designs. Packaged shielding net checks help designers deliver high-quality products that meet the demanding requirements of today’s communication and automotive applications.

About the author:

Hossam Sarhan is a senior product engineer in the Design to Silicon division of Siemens Digital Industries Software, supporting the Calibre PERC reliability platform and Calibre PEX tools. His current work focuses on circuit reliability verification and inductance parasitics extraction. Prior to joining Siemens, he worked in modeling and design optimization for on-chip power management circuits. Hossam received his B.Sc. from Alexandria University, Egypt, his M.Sc. degree from Nile University, Egypt, and his Ph.D. from CEA-LETI, Grenoble, France.

Also Read:

Revolutionizing Simulation Turnaround: How Siemens’ SmartCompile Transforms SoC Verification

Siemens EDA Unveils Groundbreaking Tools to Simplify 3D IC Design and Analysis

Jitter: The Overlooked PDN Quality Metric


Executive Interview with Matthew Addley

by Daniel Nenni on 07-20-2025 at 10:00 am


Matthew Addley is an Industry Strategist at Infor, specializing in the global manufacturing sector. With over 30 years of experience in driving business transformation through technology, he aligns industry needs with Infor’s product strategy through thought leadership, customer engagement, and market insight. Beginning his career in the UK aerospace and defence industry, Matthew now spends much of his time in the Asia Pacific region operating from his home office in Sydney, Australia, bringing a global perspective across mature and emerging markets in ERP, manufacturing and supply chain excellence, and the increasing value of platform technologies.

What are the common supply chain and operational challenges you see among your customers?

Across industries and regions, a recurring theme we hear about is the difficulty of achieving true collaboration throughout the supply chain. Interestingly, the specific pain points can differ depending on where you’re located. For example, in the U.S., customers will often say, “Our suppliers aren’t collaborating with us,” while in Thailand, the sentiment is flipped: “Our customers aren’t collaborating with their suppliers.” The underlying issue (a breakdown in coordinated communication) is consistent, but perceptions of where the problem originates shift depending on regional context.

Another major challenge is the need to respond quickly and efficiently to change. The need for resilience and responsiveness has never been higher, as global supply chains continue to face geopolitical disruptions and lingering fragility from past events. As a result, organizations are under pressure to adapt rapidly to changes in demand, supply shortages, and pricing fluctuations.

At the operational level, one challenge that’s often overlooked, but at the same time is incredibly impactful, is onboarding new employees on the shop floor. We’re seeing a massive generational knowledge shift, where the people with deep knowledge of processes are retiring or moving on, and that knowledge is often left undocumented. It becomes extremely difficult to maintain production efficiency when newer workers are left to figure things out on their own. We deliver enterprise applications to bridge that gap by making processes more visible and repeatable, turning experience into data that everyone can use.

Infor’s How Possible Happens report found that 75% of global companies surveyed expect 20%+ gains from technology, but our evidence suggests that, without the focus on bulletproof processes, agility, and customer-centricity that our solutions provide, many fail to reach their objectives. We partner to help organizations better anticipate and adapt to supply chain disruptions, proving that visibility and agility are more than buzzwords. They’re measurable outcomes.

What specific challenges or use cases have you seen in the semiconductor industry, and how are you helping customers address them?

The semiconductor industry faces unique challenges related to supply chain fragility and component sourcing. One specific issue is ensuring the consistent quality of highly specialized parts across different suppliers. Historically, many manufacturers relied on a single supplier to meet the necessary minimum order quantities. But that approach is becoming increasingly risky.

We enable what we call “true dual sourcing,” which is the ability to proactively manage multiple suppliers for the same part, rather than just defaulting to the one that offers the right quantity. More importantly, we track and manage quality and other performance measures across suppliers so that if a company shifts from one supplier to another, they can establish and maintain confidence in quality. We also allow customers to allocate supply based on historical performance, essentially increasing resilience.

In addition, we track parts beyond just the generic descriptors of form, fit, and function. We capture the manufacturer’s part number, which gives far more granular insight and allows our customers to know whether a part can be used in a highly specific application or only in a generic context. That’s critical in semiconductor manufacturing and downstream activities, where a seemingly identical part from two different sources might not behave the same way. With our system, customers gain the visibility they need to make those nuanced decisions.

One semiconductor manufacturer cited in the How Possible Happens report saw a 40% reduction in time spent on quality-related supplier follow-ups after implementing Infor’s solution, which is a great example of how precise data and supplier insights drive better decision-making.

Where does your solution outperform your competitors?

Where Infor really shines is in operations, especially in areas like production, supply chain planning, and execution.

We often hear from our customers that “our operations are cleaner and better with your solution.” That’s because we’re built with manufacturing and supply chain complexity in mind, not just financial reporting. In fact, our financial modules are strong enough to support global operations, but they don’t need to be over-engineered because we reduce the amount of rework required. We’re able to capture accurate data at the point of production, which flows directly into financial processes, minimizing the need for reconciliation.

The challenge for us is that CFOs are sometimes comfortable with Infor’s competitors. One of our goals is to reassure them that we’re not trying to immediately overhaul everything, especially not their core financial systems. Instead, we often coexist with them initially, while bringing real-time, detailed operational visibility to the production floor. That’s where we outperform: in helping customers operate more efficiently day-to-day.

And customers are seeing the difference: 64% of Infor users report improved operational efficiency within 12 months of go-live, underscoring our ability to drive immediate, meaningful value where it matters most.

How do you ensure flexibility while maintaining a prescriptive product approach?

We take a prescriptive approach where it makes sense, but we know that not every customer fits into a single mold. That’s why we maintain a verticalized product management structure. When a customer comes to us with a unique need, we first ask: “Is this a one-off requirement, or is it something we’re hearing across the industry?” If it’s a common issue, we’ll prioritize building it into the product roadmap. If it’s a one-off, we offer customization through cloud extensibility.

One key advantage of our platform is that customizations don’t break during upgrades. In many legacy ERP systems, custom code can derail an entire upgrade process, forcing customers to rework configurations every 6–12 months. With Infor, upgrades are seamless: customers keep a tailored experience without sacrificing agility or incurring high maintenance costs. This is especially important for companies that need to adapt quickly while remaining within budget.

How does your partner ecosystem support customer success across different segments, from SMBs to large enterprises?

Our partner ecosystem is one of our most important assets. We work with a range of partners, from regional experts and boutique consulting firms to global systems integrators like Deloitte. These partners help us deliver localized, industry-specific support to customers of all sizes.

Infor’s CloudSuite solutions play a central role in enabling this success. Built on a multi-tenant cloud architecture, CloudSuite gives businesses of all sizes the ability to scale quickly, respond to market changes with agility, and gain real-time visibility into operations across the enterprise. Our partners are trained to leverage these capabilities to help customers drive faster time-to-value, reduce IT complexity, and improve transparency across the board.

For mid-market and enterprise clients, particularly in multi-tier manufacturing or semiconductor settings, we often operate in a “two-tier” ERP model: running on the shop floor while headquarters uses a different enterprise system. In these cases, our partners help ensure seamless data flow and coordination between the two systems.

For SMBs, our partners play a critical role in delivering fast, cost-effective implementations. These customers often don’t have large IT teams, so our partners step in as both implementers and ongoing advisors, sometimes even serving as virtual CIOs or COOs. The goal is to meet customers where they are and provide the right level of support based on their size, industry, and growth trajectory. And it’s working, with 79% of Infor customers saying that moving to CloudSuite helped them scale more quickly and respond to business changes with greater agility.

What is your approach to incorporating new technologies like AI and machine learning?

We don’t believe in handing customers a generic AI toolkit and saying, “Go figure it out.” Instead, we’re focused on delivering purpose-built, scenario-driven AI solutions that solve specific, tangible problems.

Take contract analysis in the electronics industry, for example. Service terms in these contracts are critical and comparing them manually is time consuming and error-prone. We’re using generative AI to help partners instantly analyze and compare service terms across contracts. This drastically reduces the time and effort required to make informed decisions, particularly in fast-moving environments where speed and accuracy are essential.

Infor Velocity Suite plays a key role in how we enable rapid, value-driven innovation. It provides a foundation of pre-built, industry-specific accelerators and extensible AI capabilities that help customers deploy and scale technology quickly without needing to start from scratch. With Velocity, we’re able to deliver advanced features like AI-driven supply chain planning, inventory optimization, and predictive maintenance in a way that’s tailored to each customer’s industry context.

We always prioritize practical value over hype. We’re not here to sell AI for AI’s sake. We’re here to make it work for our customers—in ways they can deploy today and see results from tomorrow.

Also Read:

CEO Interview with Shelly Henry of MooresLabAI

CEO Interview with Dr. Maksym Plakhotnyuk of ATLANT 3D

CEO Interview with Carlos Pardo of KD


CEO Interview with Jonathan Reeves of CSignum

by Daniel Nenni on 07-20-2025 at 8:00 am


For more than 30 years, Jonathan has successfully led many start-up ventures, including multiple acquisitions as well as senior operating roles in networking, cloud computing, cybersecurity, and AI businesses.

He co-founded Arvizio, a provider of enterprise AR solutions, was Chairman and co-founder of CloudLink Technologies, which today is part of Dell. He also founded and served as CEO of several networking companies including Sirocco Systems and Sahara Networks.

Tell us about your company?

CSignum’s patented wireless platform is revolutionizing underwater and underground communications by overcoming the limitations of traditional radio, acoustic, and optical systems, unlocking new possibilities for IoT connectivity below the surface.

The company’s flagship EM-2 product line enables real-time, wireless data transmission from submerged or buried sensors to a nearby surface gateway through many challenging media, including water, ice, soil, rock, and concrete.

The solutions integrate with industry-standard sensors, enabling rapid application deployment and low-maintenance operation, without the need for surface buoys, pedestals, or cables that can clutter natural environments.

What problems are you solving?

CSignum addresses a fundamental connectivity gap by linking data from sensors in submerged and subsurface locations in challenging conditions quickly and easily to the desktop for monitoring and analysis, eliminating the blind spots in critical infrastructure and services.

This opens transformative possibilities for smarter infrastructure, safer operations, and better environmental outcomes on a global scale.

What application areas are your strongest?

CSignum’s strongest application areas are those where reliable, real-time data is needed from environments traditionally considered too difficult or costly to monitor:

  • Water Quality Monitoring: For rivers, lakes, reservoirs, and combined sewer overflows (CSOs), supporting compliance with evolving environmental regulations.
  • Corrosion Monitoring: For buried pipelines, storage tanks, marine structures, and offshore energy platforms, where monitoring is critical for safety and asset longevity.
  • Under-Vessel Monitoring: Including propeller shaft bearing wear, hull integrity, and propulsion system health for commercial and naval fleets—without dry-docking or through-hull cabling.
  • Urban Infrastructure: Monitoring storm drains, culverts, and wastewater systems in confined spaces.
  • Offshore Wind and Energy: Supporting environmental, structural, and subsea equipment monitoring on and around offshore wind turbines and platforms.

What keeps your customers up at night?

From public water systems and offshore platforms to shipping fleets and underground utilities, our customers are responsible for critical infrastructure. They worry about the impact of not knowing what’s happening below the surface:

  • Missed or delayed detection of environmental incidents, such as sewer overflows, leaks, or pollution events that could lead to regulatory penalties, reputational damage, or public health risks.
  • Undetected equipment degradation, especially corrosion or mechanical wear, that can result in costly failures, downtime, or safety hazards.
  • Gaps in real-time data from buried or submerged infrastructure due to the limits of traditional wireless or cabled systems, particularly in hard-to-access locations.
  • Compliance pressures, especially as governments introduce stricter real-time monitoring and reporting requirements in water, energy, and maritime sectors.
  • Resource constraints: accessing reliable, high-frequency data without adding personnel, vehicles, or costly construction projects.

What does the competitive landscape look like and how do you differentiate?

CSignum is the world’s first commercially viable platform that successfully transmits data through water, ice, soil, and other signal-blocking media, simplifying real-time data collection from the most inaccessible and hazardous locations, reducing risk and cost. No other solution currently achieves this.

The innovation and differentiation lie not just in the core technology but in the range of applications it unlocks: water quality monitoring, corrosion detection in submerged pipelines, tracking structural health of marine infrastructure, and enabling communications in ice-covered or disaster-prone environments.

What new features/technology are you working on?

CSignum is scaling its platform for widespread adoption across water and other utilities, maritime, energy infrastructure, defense, and environmental monitoring, especially through partnerships.

One area of expansion includes under-vessel systems monitoring, where CSignum’s technology enables wireless measurement of propeller shaft bearing wear and propulsion system health, all without the need for through-hull cabling or dry dock access.

In parallel, we will expand our EM-2 product family, launching next-gen models with longer battery life, smaller form factor, enhanced analytics, and plug-and-play compatibility with leading sensor systems. The CSignum Cloud platform will evolve into a hub for predictive diagnostics, anomaly detection, and digital twin integration.

How do customers normally engage with your company?

We work closely with customers to understand the physical constraints, data requirements, and operational goals of their environment.

From there, we guide them through a proof-of-concept or pilot deployment, leveraging our modular EM-2 systems and integrating with their existing sensors or preferred platforms.

Customers value our deep technical support, application expertise, and the flexibility of a platform that requires no cabling, no trenching, and minimal site disruption.

Contact CSignum

Also Read:

CEO Interview with Yannick Bedin of Eumetrys

The Sondrel transformation to Aion Silicon!

CEO Interview with Krishna Anne of Agile Analog


Closing the Stochastics Resolution Gap

Closing the Stochastics Resolution Gap
by Admin on 07-20-2025 at 6:00 am


The relentless miniaturization of semiconductor devices has always relied on achieving ever-smaller features on silicon wafers. However, as the industry enters the realm of extreme ultraviolet (EUV) lithography, it faces a critical barrier: stochastics, or the inherent randomness in patterning at atomic scales. This phenomenon introduces variability that jeopardizes yields, reliability, and overall device performance, particularly as feature sizes shrink to the limits of EUV capabilities.

Understanding the Stochastics Challenge

Stochastics manifest in several detrimental forms: line-edge roughness (LER), linewidth roughness (LWR), local critical dimension uniformity (LCDU), and local pattern placement error (LPPE). These lead to edge placement errors (EPE) and, ultimately, stochastic defects—such as missing or bridging features—that can render entire chips defective. These issues, once minor when features were large, have grown in significance as feature sizes drop to sub-20 nm scales. For instance, a 2 nm LER on a 200 nm feature was negligible; on a 40 nm feature, it becomes critical.
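The scaling argument above is easy to make concrete by treating roughness as a fraction of the critical dimension (CD). This is a simplified illustration of why a fixed absolute LER becomes critical as features shrink, not an actual metrology calculation:

```python
# Simplified illustration: the same absolute roughness consumes a much
# larger fraction of the feature budget as critical dimensions shrink.
def ler_fraction(ler_nm: float, cd_nm: float) -> float:
    """Return line-edge roughness as a percentage of the critical dimension."""
    return 100.0 * ler_nm / cd_nm

legacy = ler_fraction(2.0, 200.0)  # older node: 2 nm LER on a 200 nm line
euv = ler_fraction(2.0, 40.0)      # EUV-era node: same 2 nm LER on a 40 nm line

print(f"200 nm feature: {legacy:.0f}% of CD")  # 1% of CD
print(f" 40 nm feature: {euv:.0f}% of CD")     # 5% of CD
```

The same 2 nm of roughness goes from a 1% perturbation to a 5% one, before even counting the tail events that produce outright stochastic defects.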

The Stochastics Resolution Gap

A key insight introduced in the white paper is the Stochastics Resolution Gap: the difference between the resolution achievable in research labs versus what is viable in high-volume manufacturing (HVM). While 193i immersion lithography has largely closed this gap, EUV has not. Despite its theoretical capability to resolve features below 12 nm half-pitch, stochastic defects limit HVM production to ~16–18 nm. This ~4–6 nm resolution shortfall directly impacts chip area and cost, limiting the economic returns of EUV lithography.

Strategies to Lower the Stochastic Limit

To bridge this gap, the industry must lower the stochastic limit using various approaches:

  1. Increase Exposure Dose: Raising photon counts reduces photon shot noise, but at the expense of throughput and cost. Since EUV photons are high energy and scarce, doubling the dose reduces variability only modestly while halving tool productivity—a costly tradeoff.

  2. Resist Improvements: New metal-oxide resists (MORs) improve EUV absorption and reduce stochastic variation. MORs have shown promising results and are entering production, though optimization continues.

  3. Etching Techniques: Innovations in atomic layer etching and deposition offer opportunities to smooth patterns and control dimensions post-exposure. These techniques can lower LER and improve LCDU, but their benefits vary by design.

  4. Stochastics-Aware Design and OPC: Modern design rules now factor in stochastic variability, especially for critical layers. Likewise, optical proximity correction (OPC) must be calibrated using stochastic-aware models to prevent failure-prone “hot spots.”

  5. Stochastics-Aware Process Control: Advanced metrology tools are needed to separate global and local variations and provide real-time control. Statistical process control (SPC) and advanced process control (APC) can be improved by using accurate measurements of stochastic effects.
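The dose tradeoff in the first strategy follows directly from Poisson photon statistics: relative shot noise scales as 1/√N, so doubling the dose halves throughput but reduces noise by only about 29%. A minimal sketch with illustrative photon counts (my numbers, not from the white paper):

```python
import math

def relative_shot_noise(photons: float) -> float:
    """Poisson photon statistics: sigma/N = sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(photons)

base = relative_shot_noise(1000.0)     # nominal dose (arbitrary photon count)
doubled = relative_shot_noise(2000.0)  # 2x dose, at roughly half the throughput

improvement = 1.0 - doubled / base
print(f"Noise reduction from doubling dose: {improvement:.1%}")  # ~29.3%
```

A 2x cost in scanner productivity for a ~1.4x noise improvement is the "costly tradeoff" the text refers to.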

The Role of Accurate Metrology

At the core of all these improvements lies measurement accuracy. Traditional scanning electron microscope (SEM) metrology often overestimates variability due to image noise, leading to biased and misleading results. Fractilia’s approach removes SEM-induced noise from the analysis, offering unbiased, high-precision metrology that reflects true wafer variability. This is essential for optimizing resist selection, refining etch recipes, generating design rules, calibrating OPC models, and improving process control.

Bottom Line

Closing the Stochastics Resolution Gap is critical to sustaining Moore’s Law and maintaining economic viability at advanced nodes. This requires coordinated advances in materials, process technologies, design practices, and—most importantly—stochastic metrology. By enabling accurate, real-world measurements, companies can better manage variability, improve yields, and accelerate ramp to production. Fractilia’s tools and methodologies represent a foundational step in enabling the next generation of semiconductor manufacturing.

The full white paper is available here.


Podcast EP298: How Hailo is Bringing Generative AI to the Edge with Avi Baum

Podcast EP298: How Hailo is Bringing Generative AI to the Edge with Avi Baum
by Daniel Nenni on 07-18-2025 at 10:00 am

Dan is joined by Avi Baum, Chief Technology Officer and Co-Founder of Hailo, an AI-focused chipmaker that develops specialized AI processors for enabling data-center-class performance on edge devices. Avi has over 17 years of experience in system engineering, signal processing, algorithms, and telecommunications while focusing on wireless communication technologies for the past 10 years.

Dan explores the breakthrough AI processors Hailo is developing. These devices enable high performance deep learning applications on edge devices. Hailo processors are geared toward the new era of generative AI on the edge. Avi describes the impact generative AI on the edge can have by enabling perception and video enhancement through Hailo’s wide range of AI accelerators and vision processors. He discusses how security and privacy can be enhanced with these capabilities as well as the overall impact on major markets such as automotive, smart home and telecom.

Contact Hailo

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Shelly Henry of Moores Lab (AI)

CEO Interview with Shelly Henry of Moores Lab (AI)
by Daniel Nenni on 07-18-2025 at 6:00 am


Shelly Henry is the CEO and Co-Founder of MooresLabAI, bringing over 25 years of semiconductor industry experience. Prior to founding MooresLabAI, Shelly led silicon teams at Microsoft and ARM, successfully delivering chips powering billions of devices worldwide. Passionate about driving efficiency and innovation, Shelly and his team at MooresLabAI are transforming chip development through specialized AI-driven automation solutions.

Tell us about your company.

MooresLabAI, founded in 2025, is transforming semiconductor development using specialized AI automation. With our platform, chip design teams can accelerate their schedules by up to 7x and cut pre-fabrication costs by 86%. We integrate seamlessly into existing workflows, helping semiconductor companies rapidly deliver reliable silicon.

What problems are you solving?

Semiconductor design is notoriously expensive and slow – verification alone can cost tens of millions and take months of engineering effort. Our VerifAgent™ AI platform automates and dramatically accelerates these verification processes, reducing human error and addressing the critical talent shortage facing the industry.

What application areas are your strongest?

Our strongest traction is with companies designing custom AI, automotive, and mobile chips. Our early adopters include major NPU providers and mobile chipset developers who are already seeing impressive productivity gains and significant reductions in costly errors.

What keeps your customers up at night?

They worry about verification delays, costly re-tapeouts, and stretched engineering resources. With MooresLabAI, our customers experience significantly faster verification cycles, fewer late-stage bugs, and can do more with existing resources, easing these critical pain points.

What does the competitive landscape look like and how do you differentiate?

Many current AI tools provide general assistance but are not built specifically for semiconductor workflows. MooresLabAI uniquely offers end-to-end, prompt-free automation designed explicitly for silicon engineering. We seamlessly integrate with all major EDA platforms and offer secure, flexible deployment options, including on-premises solutions.

What new features/technology are you working on?

We are expanding beyond verification to offer complete end-to-end chip development automation—from architecture and synthesis to backend physical design, firmware generation, and full SoC integration. Our modular AI-driven platform aims to cover the entire silicon lifecycle comprehensively.

How do customers normally engage with your company?

Customers typically start with our pilot programs, which clearly demonstrate value with minimal initial effort. Successful pilots transition smoothly into subscription-based engagements, with flexible licensing options tailored to customer needs. For those hesitant about immediate adoption, we also offer verification services to quickly address specific project needs.

Contact MooresLabAI

Also Read:

CEO Interview with Dr. Maksym Plakhotnyuk of ATLANT 3D

CEO Interview with Carlos Pardo of KD

CEO Interview with Darin Davis of SILICET

CEO Interview with Peter L. Levin of Amida


New Cooling Strategies for Future Computing

New Cooling Strategies for Future Computing
by Daniel Payne on 07-17-2025 at 10:00 am


Power densities on chips increased from 50-100 W/cm2 in 2010 to 200 W/cm2 in 2020, creating a significant challenge in removing and spreading heat to ensure reliable chip operation. The DAC 2025 panel discussion on new cooling strategies for future computing featured experts from NVIDIA Research, Cadence, ESL/EPFL, the University of Virginia, and Stanford University. I’ll condense the 90-minute discussion into a blog.

Four techniques to mitigate thermal challenges were introduced:

  • Circuit Design, Floor Planning, Place and Route
  • Liquid or Cryogenic Cooling
  • System-level, Architectural
  • New cooling structures and materials

For circuit design, there are temperature-aware floor planning tools, PDN optimization, temperature-aware TSVs, and the use of 2.5D chiplets. Cooling has been done with single-phase cold plates, inter-layer liquid cooling, two-phase cooling, and cryogenic cooling in the 150 K down to 70 K range. System-level approaches include advanced task mapping, interleaving memory and compute blocks, and temperature-aware power optimization. The new cooling structures and materials involve diamond, copper nanomesh, and even phase-change materials.

John Wilson from NVIDIA talked about a 1,000X increase in single chip AI performance in FLOPS over just 8 years, going from the Pascal series to the Blackwell series. Thermal design power has gone from 106W in 2010 to 1,200W in 2024. Data centers using Blackwell GPUs use liquid cooling to attain a power usage efficiency (PUE) of 1.15 to 1.2, providing a 2X reduction in overhead power. At the chip-level, small hotspots cause heat to spread quickly, while heat spreads slowly for larger hotspots. GPU temperatures depend on the silicon carrier thickness and the type of stacking. Stack-up materials such as diamond and silicon carbide impacted thermal characteristics.
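PUE is total facility power divided by IT power, so a PUE of 1.15 means only 15% of delivered power goes to overhead. A quick check against the cited 2X overhead reduction, using an assumed air-cooled baseline of 1.35 (my assumption for illustration, not an NVIDIA figure):

```python
def overhead_fraction(pue: float) -> float:
    """PUE = total facility power / IT power; overhead is the excess over 1.0."""
    return pue - 1.0

air_cooled = overhead_fraction(1.35)   # assumed typical air-cooled data center
liquid = overhead_fraction(1.175)      # midpoint of the 1.15-1.2 range cited

print(f"Overhead power ratio: {air_cooled / liquid:.1f}x")  # 2.0x
```

With those inputs, liquid cooling cuts the non-compute overhead in half, consistent with the 2X reduction quoted above.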

A future cooling solution is using silicon microchannel cold plates.

Jamil Kawa said that the energy needs of AI-driven big-data compute farms already exceed our projected power-generation capacity through 2035, to the point that Microsoft revived a nuclear reactor at Three Mile Island for its compute farm and data center energy needs. This is not a sustainable path. Lower energy consumption per instruction (or per switched bit) is needed, and cold computing provides that answer even after all cooling costs are taken into account. There are alternative technologies that are very energy efficient at cryogenic temperatures, such as Josephson-junction-based superconducting electronics operated below 4 K, but they have major limitations, chief among them an area per function more than 1,000 times that of CMOS. Therefore, CMOS technology operated in a liquid nitrogen environment, with an operating range of 77 K to 150 K, is the answer. The cooling costs of liquid nitrogen are offset by the dramatically lower total power to be dissipated at iso-performance: CMOS in that range can run at a much lower VDD (supply voltage) for the same performance, generating much less heat to dissipate.

David Atienza, a professor at EPFL, talked about quantum computing, superconducting, and HPC challenges. He said the temperatures for superconducting circuits used in quantum computing are in the low Kelvin range. Further, for an HPC chip to be feasible, dark silicon is required to reduce power. Microsoft plans to restart the Three Mile Island power plant to power its AI data center. Liquid nitrogen can be used to lower the temperature and increase the speed of CMOS circuits. Some cold CMOS circuits can run at just 350 mV for VDD to manage power.

Albert Zeng, Sr. Software Engineering Group Director at Cadence, said they have thermal simulation software for the whole stack, starting with Celsius Thermal Solver used on chips, all the way up to the Reality Digital Twin Platform. Thermal analysis started in EDA with PCB and packages and is now extending into 3D chips. In addition to thermal for 3D, there are issues with stress and the impact of thermal on timing and power, which require a multi-physics approach.

Entering any data center requires wearing hearing protection against the loud noise of the cooling system fans. A system analysis for cooling capacity is necessary, as the GB200-based data centers require liquid cooling, and the AI workloads are only increasing over time.

Mircea Stan, a professor from VMEC, presented ideas on limiting heat generation and efficiently removing heat. The 3D-IC approach creates limitations on power delivery and thermal walls. Voltage stacking can be used to break the power delivery wall, and microfluidic cooling will help break the thermal wall.

VMEC has created an EDA tool called HotSpot 7.0 that models and simulates a die thermal circuit, a microfluidic thermal circuit, and a pressure circuit.

Srabanti Chowdhury from Stanford University talked about maximizing device-level efficiency that scales to the system level. Initially, heat sinks and package fins were used for 2D processors to manage thermal issues. An ideal thermal solution would spread heat within nm of any hotspot, spreading heat both laterally and vertically, and integrate with the existing semiconductor processing materials. Their research has shown that using diamond 3D thermal scaffolding is a viable technique for 3D-IC thermal management.

Stanford researchers have been developing this diamond thermal dielectric since 2016 and are currently in the proof-of-concept phase.

Q&A

Q:  What about security and thermal issues?

A: Jamil – Yes, thermal hacking is a security issue; attackers use thermal schemes to read secret data, so side-channel mitigation techniques are needed.

Q: Is there a winning thermal technology?

A: Liquid cooling at the edge is coming, but the overhead of microfluidic cooling must be outweighed by its benefits.

Q: Can we do thermal management with AI engines?

A: Albert – We can use AI models to help with early design power estimates and IC thermal analysis. Data centers are designed with models of the servers, where AI models are used to control the workloads.

Q: Can we convert heat into something useful on the chip?

A: Converting waste heat into something useful on the chip is not practical; we cannot convert that heat into electricity without generating even more heat in the process.

Q: What about keeping the temperature more constant?

A: Srabanti – The workloads are too variable to moderate the temperature swings. A change in materials to moderate heat is more viable.

Q: What is the future need for thermal management?

A: Jamil – Our energy needs today already exceed our projected energy-generation capacity, so the solution is to consume much less energy at a given performance level by operating at a much lower supply voltage and a lower temperature. A study of a liquid-nitrogen-cooled GPU at 77 K found cooling costs on par with forced-air cooling but with a 17% performance advantage.

Q: What about global warming?

A: Liquid nitrogen is at 77K, a sweet spot to use. Placing a GPU in liquid nitrogen vs forced air has about the same cost, but with improved performance.

Q: For PUE metrics, what about the efficiency of servers per workload?

Albert – PUE should be measured at full workload capacity.

Q: Have you tried anything cryogenic?

John – Not at NVIDIA yet.

Related Blogs


DAC News – proteanTecs Unlocks AI Hardware Growth with Runtime Monitoring

DAC News – proteanTecs Unlocks AI Hardware Growth with Runtime Monitoring
by Mike Gianfagna on 07-17-2025 at 6:00 am


As AI models grow exponentially, the infrastructure supporting them is struggling under the pressure. At DAC, one company stood out with a solution that doesn’t just monitor chips, it empowers them to adapt in real time to these new workload requirements.

Unlike traditional telemetry or post-silicon debug tools, proteanTecs embeds intelligent agents directly into the chip, enabling real-time, workload-aware insights that drive adaptive optimization. Let’s examine how proteanTecs unlocks AI hardware scaling with runtime monitoring.

What’s the Problem?

proteanTecs recently published a very useful white paper on the topic of how to scale AI hardware. The first paragraph of that piece is the perfect problem statement. It is appropriately ominous.

The shift to GenAI has outpaced the infrastructure it runs on. What were once rare exceptions are now daily operations: high model complexity, non-stop inference demand, and intolerable cost structures. The numbers are no longer abstract. They’re a warning.

Here are a few statistics that should get your attention:

  • Training a model like GPT-4 (Generative Pre-trained Transformer) reportedly consumed 25,000 GPUs over nearly 100 days, with costs reaching $100 million. GPT-5 is expected to break the $1 billion mark
  • Training GPT-4 drew an estimated 50 GWh, enough to power over 23,000 U.S. homes for a year. Even with all that investment, reliability is fragile. A 16,384-GPU run experienced hardware failures every three hours, posing a threat to the integrity of weeks-long workloads
  • Inference isn’t easier. ChatGPT now serves more than one billion queries daily, with operational costs nearing $700K per day.

The innovation delivered by advanced GenAI applications can change the planet, if it doesn’t destroy it (or bankrupt it) first.

What Can Be Done?

Uzi Baruch

During my travels at DAC, I was fortunate to spend some time talking about all this with Uzi Baruch, chief strategy officer at proteanTecs. Uzi has over twenty years of software and semiconductor development and business leadership experience, having managed R&D and product teams and large-scale projects at leading global high-technology companies. He provided a well-focused discussion about a practical and scalable approach to taming these difficult problems.

Uzi began with a simple observation. The typical method to optimize a chip design is to characterize it across all operating conditions and workloads and then develop design margins to keep power and performance in the desired range. This approach can work well for chips that operate in a well characterized, predictable envelope. The issue is that AI, and in particular generative AI applications are not predictable.

Once deployed, the workload profile can vary immensely based on the scenarios encountered. And that dramatically changes power and performance profiles while creating big swings in parameters such as latency and data throughput. Getting it all right a priori is like reliably predicting the future, a much sought after skill that has eluded the finest minds in history.

He went on to point out that the problem isn’t just for the inference itself. The training process faces similar challenges. In this case, wild swings in performance and power demands can cause failures in the process and wasteful energy consumption. If not found, these issues manifest as unreliable, inefficient operation in the field.

Uzi went on to discuss the unique approach proteanTecs has taken to address these very real and growing problems. He described the use of technology that delivers workload-aware real-time monitoring on chip. Thanks to very small, highly efficient on-chip agents, parametric measurements – in-situ and in functional mode – are possible. The system detects timing issues, operational and environmental effects, aging and application stress. Among the suite of Agents are the Margin Agents that monitor timing margins of millions of real paths for more informed decisions. And all of this is tied to the actual instructions being executed by the running workloads.

The proteanTecs solution monitors the actual conditions the chip is experiencing from the current workload profile, analyzes it and reacts to it to optimize the reliability, power and performance profile. All in real time. No more predicting the future but rather monitoring and reacting to the present workload.

A reasonable question here is what is the overhead of such a system? I asked Uzi and he explained that area overhead is negligible as the monitors are very small and can typically be added in the white space of the chip. The gate count overhead is about 1 – 1.5 percent, but the power reduction can be 8 – 14 percent. The math definitely works.
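The cost-benefit arithmetic behind "the math definitely works" is easy to sanity-check from the quoted ranges, treating gate count as a rough proxy for the cost of the monitors (an illustrative simplification on my part):

```python
# Sanity-check the overhead-vs-savings tradeoff using the quoted ranges.
gate_overhead = (0.01, 0.015)  # 1 - 1.5% additional gates for the on-chip agents
power_saving = (0.08, 0.14)    # 8 - 14% power reduction from adaptive optimization

worst_case_ratio = power_saving[0] / gate_overhead[1]  # least saving, most overhead
best_case_ratio = power_saving[1] / gate_overhead[0]   # most saving, least overhead

print(f"Savings outweigh overhead by {worst_case_ratio:.1f}x to {best_case_ratio:.0f}x")
```

Even in the worst case, the power saved is several times larger than the logic added, which is why the tradeoff is attractive.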

I came away from my discussion with Uzi believing that I had seen the future of AI, and it was brighter than I expected.

At the proteanTecs Booth

Noam Brousard

While visiting the proteanTecs booth at DAC I had the opportunity to attend a presentation by Noam Brousard, VP of Solutions Engineering at proteanTecs. Noam has been with the company for over 7 years and has a rich background in systems engineering for over 25 years at companies such as Intel and ECI Telecom.

Noam provided a broad overview of the challenges presented by AI and the unique capabilities proteanTecs offers to address those challenges. Here are a couple of highlights.

He discussed the progression from generative AI to artificial general intelligence to something called artificial superintelligence. These metrics compare AI performance to that of humans. He provided a chart shown below that illustrates the accelerating performance of AI across many activities. When the curve crosses zero, AI outperforms humans. Noam pointed out that there will be many more such events in the coming months and years. AI is poised to do a lot more, if we can deliver these capabilities in a cost and power efficient way.

Helping to address this problem is the main focus of proteanTecs. Noam went on to provide a very useful overview of how proteanTecs combines its on-chip agents with embedded software to deliver complete solutions to many challenging chip operational issues. The figure below summarizes what he discussed.  As you can see, proteanTecs solutions cover a lot of ground that includes dynamic voltage scaling with a safety net, performance and health monitoring, adaptive frequency scaling, and continuous performance monitoring. It’s important to point out these applications aren’t assisting with design margin strategy but rather they are monitoring and reacting to real-time chip behavior.

About the White Paper

There is now a very informative white paper available from proteanTecs on the challenges of AI and substantial details about how the company is addressing those challenges. If you work with AI, this is a must-read item. Here are the topics covered:

  • The Unforgiving Reality of Scaling Cloud AI
  • Mastering the GenAI Arms Race: Why Node Upgrades Aren’t Enough
  • Critical Optimization Factors for GenAI Chipmakers
  • Maximizing Performance, Power, and Reliability Gains with Workload-Aware Monitoring On-Chip
  • proteanTecs Real-Time Monitoring for Scalable GenAI Chips
  • proteanTecs AVS Pro™ – Dominating PPW Through Safer Voltage Scaling
  • proteanTecs RTHM™ – Flagging Cluster Risks Before Failure
  • proteanTecs AFS Pro™ – Capturing Frequency Headroom for Higher FLOPS
  • System-Wide Workload and Operational Monitoring
  • Conclusion

To Learn More

You can get your copy of the must-read white paper here: Scaling GenAI Training and Inference Chips with Runtime Monitoring. The company also issued a press release recently that summarizes its activities in this important area here.  And if all this gets your attention, you can request a demo here. And that’s how proteanTecs unlocks AI hardware growth with something called runtime monitoring.

Also Read:

Webinar – Power is the New Performance: Scaling Power & Performance for Next Generation SoCs

proteanTecs at the 2025 Design Automation Conference #62DAC

Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing