
The Name Changes but the Vision Remains the Same – ESD Alliance Through the Years

by Mike Gianfagna on 02-23-2026 at 6:00 am


The Electronic System Design Alliance (ESDA) has been at the center of the EDA industry through its many changes over the years. It occurred to me that an update on this organization would be useful. ESDA is a technology community within SEMI and is managed primarily by a team of three who coordinate all its activities along with a group of dedicated volunteers from member companies. I had the good fortune to spend some time with two of those core team members recently.

We had a great time talking about the phases of ESDA’s story and the achievements that were made along the way. I realized this story needed telling. Those EDA veterans among us will know some, but maybe not all of it. And those who are new to EDA should know how the industry got here. The story has many twists and turns, but one simple fact rang out during our discussion about ESDA. The name changes but the vision remains the same. Let’s look at ESD Alliance through the years.

Who’s Talking?

Bob Smith

The two gentlemen I spoke with were Bob Smith, Executive Director of the ESD Alliance, and Paul Cohen, Senior Manager of ESDA and R&D. I’ve known both Bob and Paul for a long time, and I’ve had the pleasure of attending many of their events. I even helped to create a few of them. As we go through the story, I’ll add some of my experiences as well. A couple of quick bios are appropriate:

Before ESDA, Bob was senior vice president of marketing and business development at Uniquify. Bob began his career as an analog design engineer working at Hewlett Packard. Since then, he has spent more than 30 years in various marketing, business development and executive management roles primarily working with startups and early-stage companies. These companies include IKOS Systems, Synopsys, LogicVision, and Magma Design Automation. He was a member of the IPO teams that took Synopsys public in 1992 and Magma public in 2001.

Paul Cohen

Paul Cohen has been with the various incarnations of ESDA for over 18 years. He is indeed the class historian. Prior to ESDA, Paul had a long career in semiconductor design/applications and EDA at companies such as Virage Logic, Fujitsu Microelectronics, Design Acceleration Inc, IDT, Prime Computer, and Digital Equipment Corporation. Paul began his career at General Electric.

The Beginning – EDAC

The story begins in 1989, when a group of EDA companies formed a trade group, the Electronic Design Automation Companies (EDAC), to negotiate with the increasingly important trade show portion of the Design Automation Conference. DAC had been a prestigious technical conference for many years, dating all the way back to 1964. Momentum for commercial exhibits at the event began in the mid-1970s, and by the early 1980s the trade show was becoming a core element. The first commercial DAC was held in June 1984.

The organization incorporated in 1992 and became a co-sponsor of DAC, alongside the IEEE and ACM. In 1996, the EDA Companies became the EDA Consortium, continuing its role working with DAC while addressing additional industry-wide issues.

Many other important events occurred during this time that shaped the future of the EDA industry. One was the establishment of the Phil Kaufman Award, the highest recognition in the EDA industry. A Nobel Prize for EDA of sorts. The award took its name from Phil Kaufman, a pioneer in EDA who passed away in 1992. The IEEE Council on Electronic Design Automation (CEDA) became a co-sponsor in 2005.

The first Kaufman award was given to Dr. Hermann Gummel in 1994. Gummel was a researcher at AT&T Bell Laboratories. He was recognized for his many fundamental contributions to central areas in EDA, including the integral charge control model for bipolar junction transistors that bears his name, the Gummel-Poon model. I knew both Hermann Gummel and Sam Poon – incredibly smart people.

Over the years, the coveted Kaufman Award has recognized some truly great pioneers. You should check out the all-star list here.

In 2021, ESDA and CEDA created the Phil Kaufman Hall of Fame to posthumously recognize individuals who made significant contributions through creativity, entrepreneurism and innovation to the electronic system design industry and were not recipients of the Phil Kaufman Award.

Jim Hogan, executive, angel investor and board member, and Stanford University Professor Edward J. McCluskey were the first honorees.

In 1994, EDAC published the first Market Statistics Service (MSS) report (now the Electronic Design Market Data – EDMD – report). The report included detailed revenue that was reported in confidence by public and private EDA, IP, and services companies, allowing companies, investors and analysts to track trends in the industry. Over the years, these reports have tracked the substantial growth of the EDA industry. Walden C. Rhines is the Executive Sponsor of the SEMI Electronic Design Market Data report and has been from the start. You can hear the latest results of these reports on the Semiconductor Insiders Podcast Series on SemiWiki.  Here is the most current report.

Also during this first phase, in 2009, the organization coined the tagline EDA, Where Electronics Begins. You can see the associated logo at the top of this post. The catchphrase was accompanied by an informative video. The forward-looking vision conveyed by this work has stood the test of time. Today, it’s as relevant as ever, and the organization continues to promote these ideas. This enduring vision was the catalyst for this post’s title. It essentially named itself.

I will offer one more story from this era. In 2013, EDAC put together a substantial fund-raising event called EDA: Back to the Future. It was billed as an “industry reunion”, but the primary focus was fund raising to ensure EDA had its proper place in history. The event was held at the Computer History Museum in Mountain View.

I was one of several folks who worked on the production of the event. There was a live auction and a silent auction, and a significant amount of money was raised to help document EDA’s contributions to the development of computing technology.

One last piece to share on this one. I was at eSilicon at the time, and we were the sponsor of an American Le Mans racing team called The Flying Lizards. We donated a VIP pit crew pass for the upcoming race at Laguna Seca to the live auction. The proud winner of that auction lot was someone everyone at SemiWiki knows: Dan Nenni.

Expanding the Footprint – ESDA

In 2016, the Electronic Design Automation Consortium became the Electronic System Design Alliance. This name change reflected a significant shift in the EDA industry. EDA tools were now being used for more than just chip design. It was becoming clear that collections of chips were forming the backbone of new systems, and the need to extend EDA technology into that realm was growing.

We saw a gradual shift in focus that went beyond the boundary of a single piece of silicon. EDA was becoming more widespread, and this spawned another wave of growth. Electronic system-level design became a thing.

Beyond this shift in design focus, there was another fundamental change occurring. EDA was no longer just about design tools. Semiconductor IP was gaining significant momentum as an enabler to build new systems more quickly and reliably. During our discussion, Bob Smith described an encounter at a board meeting with Simon Segars. Simon was the CEO of Arm at the time and a member of the Board. Simon was lamenting the fact that he was part of an electronic design automation consortium, but he didn’t provide EDA tools.

IP was quickly becoming a substantial piece of the EDA market. Bob explained that Simon’s comments were taken seriously and that helped to move the organization toward a new and broader identity.

A Seat at a Bigger Table – SEMI

In 2018, the ESD Alliance became a SEMI technology community. The organization was now part of a global entity that brings together more than 3,500 member companies to make a difference on top industry issues for the microelectronics industry. Bob Smith described this phase as “the changing of the guard”.

Back to that enduring tagline, EDA, Where Electronics Begins. The team that invented that vision is now part of a substantial organization that is helping the larger audience of EDA users. On the surface, this seems like a perfect fit. In my conversation with Bob and Paul, I discovered something that is obvious if you think about it a bit.

Change is difficult and takes time. As part of the shift to SEMI, ESDA began to co-locate chip design events with the popular SEMICON expositions that SEMI held. This seems perfect – one event where the entire spectrum of design and manufacturing for semiconductors could be explored.

What Bob and Paul observed in the early days of these co-located events was very little “cross-over” behavior. Those who focused on manufacturing went to that part of SEMICON, and those who focused on design went to ESDA’s event. This fact reflects the significant shift that is still occurring today to bring design and manufacturing together into one focus. Complex system design, fueled by AI workload demands, is driving it. So is heterogeneous multi-chip design. This discipline is driven significantly by material innovation in packaging, which is tied closely to manufacturing.

Bob and Paul also shared that they are seeing a shift in behavior at more recent SEMICON events. That integration of design and manufacturing focus is starting to happen. Welcome to the new world of semiconductors.

Some Final Thoughts

I have just scratched the surface of the impact from the organization formerly known as EDAC. There is so much more to the story, and we will dig deeper in future posts. Before closing, I want to thank Bob and Paul for spending time with me. And thanks to Nanette Collins for making it happen.

I got to know both Bob and Paul a bit better during our conversation. It’s worth mentioning there are other, non-EDA sides to each of these folks. Bob Smith is also a co-founder of a winery called Jazz Cellars, located in Murphys, CA. I have fond memories of attending EDAC and ESDA events where Bob would pour wine from Jazz Cellars.

Paul is something of a photography geek. He wields equipment far more sophisticated than the latest iPhone with great results. These skills have been quite valuable over the years to chronicle all the terrific events that the organization has delivered.

If you’d like to learn more about ESDA and its role in SEMI, this is a good place to start.  And that’s how the name changes but the vision remains the same.


TSMC Process Simplification for Advanced Nodes

by Daniel Nenni on 02-22-2026 at 4:00 pm

TSMC Patent US10692720B2

In the modern world, the semiconductor industry stands at the heart of technological innovation. From smartphones and laptops to advanced medical devices and artificial intelligence systems, nearly every piece of contemporary electronics depends on increasingly sophisticated microchips. Among the leading companies driving this progress is Taiwan Semiconductor Manufacturing Co., Ltd. (TSMC), the world’s largest pure-play semiconductor foundry. Through continuous research, advanced manufacturing techniques, and aggressive scaling strategies, TSMC has played a pivotal role in pushing the boundaries of what is possible in chip fabrication.

Patent US10692720B2

As semiconductor technology advances, one of the most critical goals is scaling down device dimensions. Smaller transistors allow for higher device density, faster switching speeds, and lower power consumption. However, shrinking dimensions introduces immense engineering challenges. At technology nodes such as 5nm and beyond, even minute variations in patterning can significantly impact device performance and yield. Achieving precise control over distances between features, such as the “end-to-end” spacing between adjacent structures, becomes increasingly difficult as these distances approach tens of nanometers.

Traditional lithographic processes often require multiple patterning and etching steps to achieve extremely tight spacing. In earlier approaches, forming patterns with very small end-to-end distances might involve three separate lithography steps combined with multiple etching stages. Each additional step increases production time, cost, and the potential for alignment errors. Overlay inaccuracies between masks can lead to critical dimension variations, negatively affecting device reliability and manufacturing yield. Therefore, reducing the number of processing steps while maintaining or improving precision is a key objective in advanced semiconductor fabrication.

One important innovation involves using a single lithographic process combined with carefully engineered etching techniques to achieve sub-35 nm end-to-end distances. Instead of relying on multiple pattern transfers, this approach begins with forming unidirectional features in a photoresist layer using advanced lithography, such as EUV lithography. EUV uses very short wavelengths of light to define smaller features than previously possible with deep ultraviolet systems. By carefully designing the initial pattern and then applying a controlled angled etch process, the effective length of features can be modified without changing their width.

The angled etch technique is particularly significant. By directing ion beams at specific angles relative to the substrate surface, engineers can selectively trim or extend certain dimensions of patterned structures. For example, the length of a feature along one direction can be increased, thereby reducing the end-to-end spacing between neighboring features. This allows a final pattern to achieve tighter spacing than originally defined in the photolithography mask. Importantly, this method maintains the critical width dimension while adjusting only the desired axis, enabling precise dimensional control.
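The geometry behind this trick can be sketched with a little arithmetic. The model below is my own illustration, not taken from the patent: it assumes the tilted ion beam extends each facing line end by roughly h·tan(θ), where h is the feature height and θ is the beam tilt from vertical, so the end-to-end gap closes from both sides while the width is untouched. All numbers are hypothetical.

```python
import math

def end_to_end_after_angled_etch(initial_gap_nm, feature_height_nm, tilt_deg):
    """Illustrative model (not from the patent): a beam tilted `tilt_deg`
    from vertical extends each facing line end by ~h * tan(theta),
    shrinking the gap from both sides without changing feature width."""
    extension = feature_height_nm * math.tan(math.radians(tilt_deg))
    return initial_gap_nm - 2 * extension

# A 50 nm lithography-defined gap with 30 nm tall features and a 15-degree
# tilt closes to roughly 34 nm, i.e. below the 35 nm figure cited above.
print(round(end_to_end_after_angled_etch(50, 30, 15), 1))
```

The point of the sketch is only that the final spacing can be tighter than what the mask defines, which is exactly the advantage the patent's single-lithography flow exploits.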

Such process optimization provides several advantages. First, it reduces the number of required lithography steps from three to one, cutting down cycle time and manufacturing costs. Lithography is one of the most expensive and time-consuming steps in semiconductor fabrication, so eliminating even a single lithography stage can yield substantial economic benefits. Second, fewer process steps reduce the risk of cumulative defects and misalignment errors, improving overall yield and device reliability. Third, streamlined processing enhances throughput in high-volume manufacturing environments, enabling faster delivery of advanced chips to market.

In devices such as FinFETs, which are widely used at advanced nodes, precise pattern control is especially crucial. FinFET architectures rely on three-dimensional channel structures that improve electrostatic control compared to planar transistors. However, their 3D geometry increases fabrication complexity. Maintaining consistent spacing between contacts, gates, and interconnects ensures proper electrical isolation and performance. Techniques that achieve tighter end-to-end distances without increasing process complexity directly support the continued scaling of FinFET and future transistor architectures.

Ultimately, innovation in semiconductor manufacturing is not just about making features smaller; it is about doing so efficiently, reliably, and economically. Companies like TSMC continue to invest heavily in process integration, materials engineering, and advanced patterning technologies to sustain progress beyond the 5nm node. By combining advanced lithography with creative etching strategies, the industry can overcome scaling barriers that once seemed insurmountable.

Bottom Line: As global demand for computing power grows, driven by artificial intelligence, 5G communications, autonomous vehicles, and high-performance computing, the importance of such innovations will only increase. The ability to control nanometer-scale distances with extreme precision represents not just a technical achievement, but a foundational capability that shapes the future of modern technology.

Also Read:

TSMC and Cadence Strengthen Partnership to Enable Next-Generation AI and HPC Silicon

TSMC vs Intel Foundry vs Samsung Foundry 2026

TSMC & GCU Semiconductor Training Program: Preparing Tomorrow’s Workforce


CEO Interview with Juniyali Nauriyal of Photonect

by Daniel Nenni on 02-22-2026 at 2:00 pm

Juniyali

Juniyali Nauriyal is the CEO and Co-Founder of Photonect, a photonics startup focused on commercializing advanced fiber-to-chip attachment technologies.  Juniyali is the co-inventor of Photonect’s core technology, which forms the foundation of the company. As CEO, she leads Photonect in translating cutting-edge photonic packaging research into scalable, real-world solutions.

She has participated in prestigious accelerator programs including Activate, the Luminate Accelerator. Under her leadership, Photonect won the Grand Prize at the New York State Business Plan Competition (2022). Juniyali has received multiple competitive awards and scholarships, including the Corning Women in Optical Communications Scholarship (2022) from Optica, the SPIE Optics and Photonics Education Scholarship (2022), the Harvey M. Pollicove Memorial Scholarship (2018) from Optica, and the Best ISA Student Award (2016) from ISA–Maharashtra. She currently serves as Vice Chair of IEEE Women in Engineering (WIE) Rochester and was named Emerging Technology Woman of the Year (2025). Juniyali holds 7 patents and has authored 20 technical publications, including 2 first-author papers.

Tell us about your company?

Photonect is a Rochester-based startup addressing a critical bottleneck in photonic packaging: slow, costly, and unreliable fiber-to-chip attachment driven by epoxy. Attaching an optical fiber is as delicate as aligning two strands of hair, yet current approaches rely on glue. Photonect’s technology includes a patented chip architecture design called the oxide mode converter and laser fusion process that forms permanent glass-to-glass bonds in under a minute. This improves coupling efficiency from ~50% to ~80%, maintains <1 dB loss, increases throughput by 10×, and reduces per-device cost by 50%. This technology is being advanced into PIX-Attach, a Rochester-designed laser splicing system built for high-volume photonic integration set to launch at OFC 2026.
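The efficiency and loss figures quoted above are two views of the same number. As a quick sanity check (my arithmetic, not Photonect's data), fractional power coupling efficiency converts to insertion loss via −10·log10(η):

```python
import math

def coupling_loss_db(efficiency):
    """Convert fractional power coupling efficiency to insertion loss in dB."""
    return -10 * math.log10(efficiency)

# ~50% coupling is about a 3 dB loss; ~80% is just under 1 dB, which is
# consistent with the "<1 dB loss" figure quoted above.
print(round(coupling_loss_db(0.50), 2))  # 3.01
print(round(coupling_loss_db(0.80), 2))  # 0.97
```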

Please share a little bit more about how the company was founded?

Photonect was founded by Dr. Juniyali Nauriyal as a spin-off from the University of Rochester, rooted directly in her doctoral research in integrated photonics. Originally an engineering student in India, Juniyali came to the U.S. to pursue an MS in Optics at the University of Rochester, where she joined the lab of Dr. Jaime Cárdenas. That collaboration evolved into her PhD work and eventually started Photonect itself.

During her PhD, Juniyali co-invented Photonect’s core technology along with Dr. Cárdenas, who is now the company’s CTO and Co-Founder. Rather than taking a traditional path into a large tech firm, Juniyali chose to commercialize her research to ensure it could deliver real-world impact. Her work led to a breakthrough approach to fiber-to-chip light coupling, improving efficiency while addressing the growing sustainability challenges driven by AI and data-center expansion.

Inspired by issues such as the rising energy demands of data centers and even their migration to colder regions to manage heat, Juniyali founded Photonect with a clear mission: to build high-performance photonic technologies that scale responsibly and support a more energy-efficient future.

What problems are you solving?

We address the limitations of epoxy-based fiber-to-chip attachment, which is slow, costly, and inherently unreliable. Today, epoxy is commonly used to connect/attach optical fibers to chips even though these joints are critical to data transfer efficiency, often resulting in up to 50% signal loss, increased heat generation, and long, labor-intensive assembly times that severely limit scalability. Our laser-assisted attachment technology replaces epoxy with a glass-to-glass bond, like soldering. This enables fiber-to-chip attachment in seconds instead of minutes, while delivering long-term reliability and stability across wide temperature ranges. In parallel, our proprietary mode converter technology significantly improves coupling efficiency, achieving <1 dB loss per chip facet and reducing overall link budget requirements by ~25%.

What industries is your solution a good fit for?

Our solution is a strong fit for industries undergoing a shift toward dense, high-performance optical integration, driven by the speed, power, and reliability demands of modern computing. This includes AI data centers and cloud infrastructure, high-performance computing, telecommunications, and emerging markets like quantum technologies. As copper and conventional photonic interconnects hit physical limits, Photonect’s technology delivers a step-change in performance, scalability, and energy efficiency, enabling these industries to keep pace with rapidly growing system demands.

What keeps your customers up at night?

Our customers worry about scaling photonic products to high volume without blowing up cost, power use, or yield. Today’s fiber-to-chip attachment is slow, energy-intensive, and requires specialized manpower, often forcing teams to choose between scalability and efficiency, putting timelines, margins, and reliability at risk, especially now as AI and data-center demand accelerates.

What does the competitive landscape look like and how do you differentiate?

In the fiber-to-chip attachment landscape, existing solutions include wire bonding, fiber packaging with epoxy, and passive alignment approaches. Some players offer high-precision automated assembly systems, but these still rely on active alignment and adhesive bonding, and none have been able to fully replace epoxy. Photonect takes a fundamentally different approach by eliminating adhesives through its proprietary laser-assisted attachment process. Our technology enables fiber-to-chip attachment in a few seconds, achieves <1 dB optical loss, and delivers throughput of up to 60 attachments per hour.

In addition, our proprietary mode converter designs further reduce coupling losses by 15–20%, while maintaining high power compatibility and reliability across extreme temperature ranges.

What new features/technology are you working on?

We’re working on PIX-Attach, a next-generation laser-assisted, epoxy-free fiber-to-chip attachment platform that dramatically speeds up photonic packaging: it cuts attach time to ~60 seconds per unit, enables ~60 units per hour, improves alignment stability, reduces energy use, and significantly increases production throughput while lowering the number of machines needed on the factory floor. Our value proposition lies in creating premium, customized fiber-attach equipment that fits each customer’s needs. We are set to launch this in mid-March 2026.

How do customers normally engage with your company?

We’re present at all major conferences such as OFC, Photonics West, and ECOC. You can also reach out to us directly at info@photonectcorp.com, or submit a query or request a meeting through our website.


CONTACT PHOTONECT

Also Read:

CEO Interview with Aftkhar Aslam of yieldWerx

CEO Interview with Elad Raz of NextSilicon

VSORA Board Chair Sandra Rivera on Solutions for AI Inference and LLM Processing


Podcast EP332: How AI Really Works – the Perspectives of Linley Gwennap

by Daniel Nenni on 02-20-2026 at 10:00 am

Daniel is joined by Linley Gwennap, technology analyst and author of the new book “How AI Really Works: The Models, Chips, and Companies Powering a Revolution.” Linley was the long-time editor of Microprocessor Report and chaired the popular Linley Processor Conferences.

Dan explores with Linley the impact AI is having on the market and on the broader population, and Linley offers some straightforward explanations of how AI is shaping our world. They discuss trends in AI model size and competitive dynamics, and Linley shares perspectives on how trends today and in the future will affect the market size and the key players, along with some predictions about how AI will impact the overall workforce, both now and in the future.

You can get a copy of Linley’s new book here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


What is the 3nm Pessimism Wall and Why is it An Economic Crisis?

by Mike Gianfagna on 02-20-2026 at 8:00 am


Chip design is getting more difficult as technology advances. Everyone knows that. A lot of the discussion around these issues tends to focus on the demands posed by massive AI workloads and the challenges of shifting to heterogeneous multi-die design. While these create real problems, there is an underlying effect that is making the situation much worse than it needs to be: The ROI on advanced-node scaling is compressing in ways most teams do not yet quantify.

For three decades, Moore’s Law was an economic engine. Today, at 3nm and below, that engine is slowing. While foundries promise massive PPA (power, performance, and area) gains, the reality for most design teams is a “Performance Mirage.” Despite multi-billion-dollar investments in 3nm Gate-All-Around (GAA) and FinFET migrations, a large portion of the promised performance of these advances can be out of reach. It is often being sacrificed to “margin” reserved solely to compensate for modeling uncertainty. Let’s refer to this structural inflation of clock margin as the “Pessimism Wall”.

The good news is that this margin is not a law of physics.  It can be safely reclaimed and redirected toward real silicon limits. More on that shortly. But first let’s answer the question, what is the 3nm pessimism wall and why is it an economic crisis? The answer begins with understanding how margin accumulates – and why that accumulation has become economically consequential.

Anatomy of the Crisis

At 3nm, clock sign-off guard bands have exploded to 25–35% of the total clock period. This is not a choice; it is a structural consequence of abstraction-based sign-off methodologies. The following data highlights the mechanisms driving this structural margin inflation.

The data below reflects trends widely observed across advanced-node programs. While exact values vary by design, the structural pattern is consistent.

  • The 2.5x Over-Design Trap: Applying 28nm-era sign-off assumptions to 3nm designs forces designers to over-design clock networks by 2.5x. You are often paying for buffers, area, and routing complexity that the silicon does not need.
  • The Near-Threshold Danger Zone: As voltages approach device thresholds, delay behavior becomes exponential and non-linear. Standard static timing analysis (STA) over-linearizes these effects, forcing an “uncertainty tax” of 8–12% of the clock period just to remain “safe”.
  • The Jitter Black Hole: Power-supply-induced jitter (PSIJ) and simultaneous switching now consume 5–10% of the margin. Traditional tools treat this as a static guess.

All these effects hide useful margin behind the pessimism wall.
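To see why "over-linearizing" an exponential is dangerous, consider a toy delay model. This is purely illustrative (the constants are invented, not foundry data): near threshold, delay grows roughly exponentially as supply voltage drops, so a straight-line fit made at comfortable voltages badly underestimates delay further down.

```python
import math

def delay_exp(v, d0=10.0, v0=0.75, s=0.1):
    """Toy model: delay (ps) grows exponentially as V drops below v0.
    d0, v0, and s are invented constants, not real device parameters."""
    return d0 * math.exp((v0 - v) / s)

# Fit a line through two "comfortable" voltages, then extrapolate downward,
# which is roughly what a linearized STA delay model does.
d_hi, d_lo = delay_exp(0.75), delay_exp(0.70)
slope = (d_lo - d_hi) / (0.70 - 0.75)
linear_at_055 = d_hi + slope * (0.55 - 0.75)
actual_at_055 = delay_exp(0.55)

# The linear estimate misses the true delay by roughly 2x near threshold.
print(round(linear_at_055, 1), round(actual_at_055, 1))
```

A tool that knows its linearization can be this far off has little choice but to pad the result with a large uncertainty margin, which is exactly the "uncertainty tax" described above.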

A Closer Look at the Pessimism Wall

Every picosecond of unnecessary margin has a direct impact on the project’s bottom line. The following table breaks down the contributors that can cumulatively drive total clock margin toward the ~25–35% range:

The contributors above are individually defensible and grounded in advanced-node physics. What creates the pessimism wall is their cumulative stacking.

In abstraction-based sign-off flows, voltage sensitivity, jitter, aging, and variability are typically evaluated independently and conservatively. Worst-case assumptions stack because electrical interactions are not jointly resolved in time and voltage.

The silicon did not become 35% worse. Our abstractions became cumulatively more conservative. To be clear, the issue is not transistor device models themselves. The structural pessimism arises from abstraction-based timing methodologies and independently stacked worst-case assumptions that approximate electrical behavior rather than directly solving it.

The Economic Consequences – A Crisis in the Making

Leaving 10–15% recoverable clock margin on the table is not a modeling inconvenience – it can be a massive competitive liability. Let’s look a bit closer at what’s involved.

  • The Power Penalty: Because dynamic power scales with the square of voltage, a 10% reduction in margin translates to a ~18–20% reduction in dynamic clock power. Given that clock networks consume 30–40% of SoC power, this often determines whether a design leads its segment or thermally limits its own performance.
  • The Revenue Loss (SKU binning): Reclaiming ~10% margin enables a 300 MHz boost on a 3 GHz target. In high-volume production, shifting even 10% of volume into a premium performance bin can represent hundreds of millions of dollars in incremental revenue currently sacrificed to uncertainty.
  • Area Inefficiency: Abstraction-driven margin forces aggressive cell upsizing, leading to a 10–15% increase in clock tree area. This bloats die size and increases per-unit cost across millions of chips.
  • Field Failures: The industry’s reliance on broad “Guard Bands” actually increases risk:
    – Masked Failures – Broad margins “hide” specific electrical failures—like rail-to-rail or duty-cycle issues – until they hit the field.
    – Aging Roulette – Applying “Global Aging Taxes” ignores path-specific stress, leading to chips that pass tapeout but degrade prematurely in the field.

The Solution: Full-Clock Physics Enforcement

The crisis stems from one fact: Models have stopped keeping up with physics.

The most direct way to address structural pessimism is to replace timing abstractions and estimates with electrical resolution by performing detailed, accurate SPICE analysis on the entire clock. Up to now, this wasn’t practical for two reasons. First, standard SPICE runs on networks of this size would take an unreasonable amount of time and consume vast (and expensive) compute resources. And second, standard SPICE can’t even load networks of this size.

The good news is that these barriers are now gone. The ClockEdge Veridian suite delivers a family of SPICE-accurate analysis engines for timing, power, jitter, and aging. And Veridian delivers sign-off precision at real-world speed, revealing interactions that conventional flows miss. This enables full-clock waveform fidelity across timing, power, jitter, and aging interactions.

Veridian engines enable billion-transistor, unreduced SPICE analysis performed overnight. Some of the benefits of this include:

  • Eliminate Abstraction-Driven Guesswork: Enforce Kirchhoff’s Current and Voltage Laws across the entire netlist to eliminate table-lookup errors
  • Expose Hidden Failures: Veridian identifies rail-to-rail and duty-cycle failures that traditional STA “masks” with margin until it is too late
  • Path-Specific Aging: Stop applying global derates. Measure actual aged behavior to recover margin safely

The question is no longer whether the pessimism wall exists – physics proves it does. The question is whether your methodology is capable of exposing it before your competitor does.

At advanced nodes, competitiveness is increasingly determined not by how much margin can be added, but by how much unnecessary margin can be safely removed.

The 3nm Pessimism Wall is not a silicon limitation – it is a modeling one.

The teams that resolve physics directly rather than approximate it will reclaim performance, power efficiency, and yield that others continue to surrender to uncertainty.

To Learn More

ClockEdge recently published a very informative white paper titled Reclaiming Margin in Advanced Nodes – Why Abstraction-Based Sign-Off Is Becoming the Dominant PPA Limiter at 3nm and Below.

This white paper is essentially a master class in how to preserve margin, performance and profits at advanced nodes. If you find yourself becoming a “slave” to ever-increasing design margins, this white paper is a must-read. You can access your copy here. And that’s what the 3nm pessimism wall is and why it is an economic crisis.

Also Read:

The Risk of Not Optimizing Clock Power

Taming Advanced Node Clock Network Challenges: Jitter

Taming Advanced Node Clock Network Challenges: Duty Cycle


CEO Interview with Aftkhar Aslam of yieldWerx

CEO Interview with Aftkhar Aslam of yieldWerx
by Daniel Nenni on 02-20-2026 at 6:00 am


Aftkhar Aslam is the Co-Founder and Chief Executive Officer of yieldWerx and a semiconductor industry veteran with more than 30 years of experience spanning manufacturing, test engineering, yield management, IP strategy, and enterprise digital transformation.

Under his leadership, yieldWerx has become a trusted data and yield analytics platform supporting semiconductor companies across fab, assembly, test, advanced packaging, photonics, and AI-driven device manufacturing. The platform enables organizations to unify fragmented manufacturing data into scalable, actionable yield intelligence.

Prior to founding yieldWerx, Aftkhar held senior leadership roles at Texas Instruments, where he served as Worldwide Director of Test & Yield Management Solutions and Director of Digital Transformation for design and delivery systems and solutions across hardware and software.

He also served as a Director within Accenture’s Industry X (IX) practice, where he advised leading global technology organizations including Intel, GlobalFoundries, Qualcomm, Lam Research, Microsoft, STMicroelectronics, and Skyworks. His consulting work focused on bridging the Design-to-Manufacturing divide — architecting Digital Thread and Digital Twin strategies that connected product design, IP management, manufacturing execution, test, and enterprise systems into unified operational frameworks.

Aftkhar holds patents and possesses deep expertise in intellectual property management and protection. His experience spans semiconductor IP lifecycle governance, secure data architectures, and protecting high-value design assets across complex global supply chains.

Tell us about your company.

yieldWerx is a semiconductor-focused data and enterprise yield analytics platform. We help manufacturers unify data across fab, assembly, test, inspection, and advanced packaging into a single environment where engineers can extract real insight — not just generate reports.

What makes us different is that we tend to operate where the problems are hardest. We work with highly specialized, niche products and manufacturing flows — whether that’s heterogeneous integration, chiplets, co-packaged optics, MicroLED with billions of pixel-level data points, or silicon photonics requiring optical and electrical correlation. These aren’t simple wafer-yield problems; they’re multi-domain, multi-stage challenges that traditional tools struggle to handle.

We’re purpose-built for semiconductor manufacturing at extreme scale and extreme complexity. Our platform is designed to manage unconventional data models, massive datasets, and deep traceability requirements without breaking performance or usability.

At a high level, we help companies move from fragmented data silos to a connected digital thread — accelerating yield learning, reducing ambiguity, and enabling smarter, faster engineering decisions in some of the industry’s most advanced and specialized product environments.

What problems are you solving?

The biggest problem in semiconductor manufacturing today isn’t lack of data — it’s fragmentation.

Data lives in MES systems, testers, inspection tools, spreadsheets, homegrown databases, and separate analytics platforms. Engineers spend enormous time manually stitching it together before they can even begin root-cause analysis.

We solve that by unifying the data model and enabling cross-domain correlation — electrical + optical, wafer + module, socket + silicon, defect + yield, and so on.

Another major problem is scale. Modern devices generate massive datasets. Traditional tools weren’t designed for billions of data points. Ultimately, we reduce the time from anomaly detection to root cause — and that directly impacts yield, cost, and time-to-market.

What application areas are your strongest?

We’re strongest in environments where complexity is high and data volumes are extreme.

That includes:

  • Advanced packaging (2.5D/3D, chiplets, CPO)
  • Silicon photonics
  • MicroLED and display technologies
  • AI and high-performance compute devices
  • Automotive and high-reliability semiconductor manufacturing

Anywhere there’s multi-stage manufacturing with complex traceability requirements — that’s where we add the most value.

What keeps your customers up at night?

Three things:

  1. Yield ramp speed — especially for new technologies. Every week of delay is expensive.
  2. Escapes or overkill at test — failing good parts or shipping marginal ones.
  3. Lack of traceability when something goes wrong.

They worry about whether they truly understand where yield loss is originating — is it the wafer, the packaging step, the bonding process, the socket, the test program?

If the answer takes weeks to figure out, that’s a problem. Our goal is to make that answer visible in hours or days.

What does the competitive landscape look like and how do you differentiate?

There are traditional yield tools, BI tools, and homegrown systems.

Traditional yield tools often focus on wafer-level analysis but struggle with cross-domain traceability. BI tools are flexible but require heavy customization and don’t inherently understand semiconductor manufacturing.

We differentiate in three ways:

  1. Semiconductor-native data model — we understand wafers, panels, bonding, pixel maps, optical lanes, serialized modules.
  2. Extreme scalability — billions of records without performance degradation.
  3. Closed-loop capability — we don’t just visualize data; we enable correlation across design and manufacturing stages to drive actionable decisions.

We’re not just another dashboard — we’re the infrastructure layer for yield intelligence.

What new features or technology are you working on?

We’re expanding heavily into:

  • Pixel-level and device-level analytics for MicroLED and advanced displays
  • Optical + electrical unified analysis for photonics and CPO
  • Advanced spatial analytics and pattern recognition
  • AI-assisted anomaly detection and predictive yield modeling from the start of design
  • Deeper integration with test hardware and equipment for closed-loop optimization

We’re also strengthening genealogy and digital-thread capabilities to support next-generation packaging and heterogeneous integration.

The industry is moving toward system-level understanding, not just wafer-level — and that’s where we’re investing.

How do customers normally engage with your company?

Most engagements start with a specific pain point — slow yield ramp, fragmented data, lack of traceability, or scaling a new technology.

We typically begin with a focused pilot or proof-of-value around a real manufacturing dataset. Once customers see how quickly we can unify and analyze their data, the engagement expands into a broader enterprise deployment.

We also work closely with equipment providers, OSATs, and ecosystem partners, because yield today is collaborative — not isolated.

At the end of the day, we’re a long-term partner. Once we’re embedded in the manufacturing data flow, we become part of the operational backbone.

CONTACT yieldWerx

Also Read:

CEO Interview with Elad Raz of NextSilicon

VSORA Board Chair Sandra Rivera on Solutions for AI Inference and LLM Processing

CEO Interview with Dr. Raj Gautam Dutta of Silicon Assurance


Intelligent Networks: Power, Reliability, and Maintenance in Telecom — Webinar Preview

Intelligent Networks: Power, Reliability, and Maintenance in Telecom — Webinar Preview
by Daniel Nenni on 02-19-2026 at 2:00 pm

Intelligent Networks semiwiki ads v7 400x400px

The upcoming webinar “Intelligent Networks: Power, Reliability, and Maintenance in Telecom” will focus on how telecommunications networks are adapting to growing demands for efficiency, resilience, and scalability. As telecom operators expand 5G deployments, integrate cloud-native architectures, and prepare for AI-driven services, the need for intelligent, automated network management has never been greater. This webinar aims to explore how intelligence embedded across the network can help operators address three critical challenges: power consumption, network reliability, and maintenance optimization.

One of the key topics to be addressed is power management in modern telecom networks. With increasing network densification, edge deployments, and higher-capacity equipment, energy usage has become a major operational and financial concern. The webinar will examine how intelligent networks can leverage real-time data, analytics, and automation to optimize power usage across sites and network elements. By aligning power consumption with traffic demand and operational conditions, operators can reduce energy waste, lower operational expenses, and support sustainability goals without compromising performance.

The webinar will also highlight reliability as a core requirement for next-generation telecom services. As networks support mission-critical applications, ranging from emergency communications to industrial automation, downtime and service degradation are no longer acceptable. Speakers are expected to discuss how intelligent networking technologies enable proactive reliability strategies, moving beyond traditional reactive fault management. Topics will include the use of advanced telemetry, AI-driven anomaly detection, and predictive analytics to identify potential failures before they impact service, helping operators maintain high availability and consistent quality of experience.

Another major focus of the session will be the evolution of maintenance practices in telecom environments. Conventional maintenance models, based on fixed schedules or manual inspections, can be inefficient and costly, particularly in large-scale, geographically distributed networks. The webinar will explore how intelligent networks support predictive and condition-based maintenance approaches. By continuously monitoring network health indicators such as power systems, cooling infrastructure, and hardware performance, operators can anticipate issues and intervene at the optimal time. This approach reduces unnecessary site visits, minimizes service disruptions, and extends the lifespan of critical assets.

Automation and orchestration are expected to be recurring themes throughout the discussion. As telecom networks grow in size and complexity, manual management becomes increasingly impractical. The webinar will examine how intelligent network platforms can automate routine tasks such as fault correlation, power optimization, and service recovery. Centralized visibility and intelligent orchestration enable operators to respond faster to issues, improve operational efficiency, and scale their networks with confidence.

In addition, the webinar will touch on network resilience and security, recognizing that reliability is not limited to physical infrastructure or power availability. As networks become more software-driven and interconnected, they also face greater exposure to cyber threats. Intelligent networks can enhance resilience by identifying abnormal behavior, supporting rapid mitigation, and maintaining service continuity in the face of both physical and digital disruptions.

Overall, “Intelligent Networks: Power, Reliability, and Maintenance in Telecom” is positioned to provide valuable insights into how intelligence, automation, and data-driven decision-making are shaping the future of telecom operations. By addressing power efficiency, reliability, and maintenance as interconnected challenges, the webinar will offer a holistic perspective on building networks that are more sustainable, resilient, and prepared for future demands. For telecom operators, vendors, and industry stakeholders, the session promises to outline practical strategies and emerging best practices for navigating the next phase of network evolution.

REGISTER HERE

Also Read:

Accelerating NPI with Deep Data: From First Silicon to Volume

Failure Prevention with Real-Time Health Monitoring: A proteanTecs Innovation

Thermal Sensing Headache Finally Over for 2nm and Beyond



Custom IC Design using Additive Learning

Custom IC Design using Additive Learning
by Daniel Payne on 02-19-2026 at 10:00 am

Additive learning engine

Custom IC design has demanding technical requirements to produce accurate simulation results for timing and power analysis in the shortest run times. EDA vendors have been rushing to use AI and ML technology to meet these analysis requirements. I attended a webinar from Siemens on accelerating iterative design cycles with Solido additive learning techniques to understand their approach to benefit custom IC designers.

Mohamed Atousa, Product Management Manager at Siemens, started with an overview of their custom IC platform, with tools spanning from schematic capture, variation-aware design, physical layout, library characterization, IP QA, and SPICE simulation to generative and agentic AI.

Within the Solido Design Environment are multiple analysis tools and the focus of this webinar was on the additive learning features used in PVTMC Verifier and the High-Sigma Verifier tools to achieve a 3X to 20X speed-up on incremental and iterative runs.

High-quality IC designs require variation-aware verification flows, yet traditional Monte Carlo simulation runs too slowly, and extrapolation techniques cannot find outliers or model non-Gaussian behavior. Most simulation jobs are also iterative: minor changes to a design or PDK require re-verification, making the verification flow far too time consuming.

The new AI technologies enable SPICE-accurate, variation-aware verification that is much faster than previous approaches. Solido PVTMC Verifier provides full-coverage verification across PVT corners plus Monte Carlo, delivering 2X to 10X speedups while finding outliers that other methods miss. The AI in Solido High-Sigma Verifier produces 6-sigma yield verification in only thousands of circuit simulations, for speed-ups of 1,000X to a billion X over brute force, all while maintaining SPICE accuracy.
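
A quick back-of-the-envelope check (Python standard library only) shows why brute-force Monte Carlo is hopeless at 6 sigma and why speedups in the 1,000X-to-billion-X range are plausible: the one-sided Gaussian tail probability at 6 sigma is about 1e-9, so merely observing a statistically useful number of failures takes on the order of 1e11 samples.

```python
# Why brute-force Monte Carlo cannot reach 6 sigma: the one-sided Gaussian
# tail probability is ~1e-9, so observing ~100 failures needs ~1e11 samples.
from math import erfc, sqrt

def tail_probability(sigma):
    """One-sided Gaussian tail probability P(X > sigma * std dev)."""
    return 0.5 * erfc(sigma / sqrt(2.0))

p6 = tail_probability(6.0)          # ~9.87e-10
samples_needed = 100 / p6           # samples to expect ~100 failures
print(f"P(fail) at 6 sigma ~ {p6:.3e}")
print(f"Samples to expect 100 failures ~ {samples_needed:.2e}")
```

Against that baseline, reaching a 6-sigma verdict in thousands of simulations is where the quoted speedup factors come from.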

Jayne Alexander, Technical Product Manager at Siemens, spoke next about how additive learning technology retains and reuses AI models to speed up iterative workflows by 3X to 20X. In a traditional workflow it is common to verify a design, then re-verify because of changes such as a PDK revision, transistor re-sizing, a new simulator version, or added corners. With the new iterative workflow the first run is unchanged, but subsequent verification runs are much faster because previous results are stored for re-use.

With the Solido Additive Learning Engine you experience accurate verification results every time, automatically. Here’s the internal flow chart to achieve this benefit.

This additive learning technology retains and reuses AI models from previous jobs to speed up iterative runs, making the process fast, accurate, and automated, with no user input or AI knowledge required from tool users. Under the hood is an AI datastore designed to be light-weight and optimized, supporting multiple users at once while using little disk space.

From Microchip we heard from Amit Bansal, Technical Staff Engineer, whose old iterative design process took 20-30 days. Microchip ran PVTMC Verifier on bandgap reference circuits and RC oscillators, with and without Additive Learning (AL). For the bandgap reference circuit they made a design change on the pre-layout netlist, then verified at 3 sigma across 21 PVT corners. The results were 3.7X fewer simulations and a 4.1X wall clock speedup:

  • Base run – 885 simulations, 2hr 16min
  • AL off – 1,170 simulations, 3hr 24min
  • AL on – 315 simulations (3.7X), 49min (4.1X)

In the RC oscillator example they changed the trim cap value in the post-layout netlist, then verified at 3 sigma across 1 PVT corner. The results were even more impressive, with 20X fewer simulations and an 18X wall clock speedup:

  • Base run – 300 simulations, 14hr 16min
  • AL off – 300 simulations, 11hr 40min
  • AL on – 15 simulations (20X), 39min (18X)
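
As a sanity check, the speedup factors above follow directly from the listed AL-off versus AL-on counts; the minute conversions below are ours, everything else comes from the reported results.

```python
# Quick arithmetic check of the reported speedups (AL off vs. AL on).
runs = {
    "bandgap": {"al_off": (1170, 3 * 60 + 24), "al_on": (315, 49)},
    "rc_osc":  {"al_off": (300, 11 * 60 + 40), "al_on": (15, 39)},
}

ratios = {}
for name, d in runs.items():
    sim_ratio = d["al_off"][0] / d["al_on"][0]    # fewer simulations
    time_ratio = d["al_off"][1] / d["al_on"][1]   # wall clock speedup
    ratios[name] = (sim_ratio, time_ratio)
    print(f"{name}: {sim_ratio:.2f}X fewer simulations, {time_ratio:.2f}X faster")
```

These reproduce the article’s 3.7X/4.1X and 20X/18X figures to within rounding.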

At DAC 2025 Microchip and Siemens presented their results in the Poster Gladiator Competition and won.

Summary

With verification taking too long for custom IC designs with many iterations, there has to be a better way than running brute-force Monte Carlo simulations and waiting for answers. Thanks to AI-powered features like additive learning, the Solido tools have been shown to reduce the number of simulations and produce analysis answers faster, all while maintaining accuracy. Users can be productive quickly: there is no learning curve and no AI expertise required, and the tools work out of the box with minimal to no user input.

Watch the archived webinar online here.

Related Blogs


SiFive AI’s Next Chapter: RISC-V and Custom Silicon

SiFive AI’s Next Chapter: RISC-V and Custom Silicon
by Daniel Nenni on 02-18-2026 at 2:00 pm

AI’s Next Chapter RISC V and Custom Silicon

In the rapidly evolving world of artificial intelligence and semiconductor design, open-standard processor architectures are gaining unprecedented traction. At the center of this shift is SiFive, a company founded by the original creators of the RISC-V ISA, which champions an open, extensible, and license-free alternative to proprietary architectures like x86 and Arm. A webinar titled “SiFive AI’s Next Chapter: RISC-V and Custom Silicon” encapsulates the company’s vision for how RISC-V and tailored silicon platforms will power the next wave of AI innovation, from edge devices to large-scale data centers.

The Strategic Importance of RISC-V for AI

At its core, RISC-V is a modular ISA that lets designers choose only the instruction subsets they need, and extend the base set with custom extensions suited to their applications. This openness dramatically reduces barriers to entry and enables highly specialized designs that can be optimized for power, performance, and area, crucial for AI and machine learning workloads. Unlike closed ISAs where licensing fees and fixed capabilities constrain flexibility, RISC-V allows custom silicon to be tailored from the ground up for specific AI use cases, from inference at the edge to large model training in the cloud.

A webinar on this subject would likely begin by framing why open ISAs are now receiving serious attention: as AI workloads grow in size and complexity, traditional CPU designs can become bottlenecks. Custom silicon chips designed with specific AI functions built directly into the silicon can accelerate key operations like matrix multiplication, tensor processing, and low-precision arithmetic. RISC-V’s flexible ISA makes it easier to implement such features efficiently. Moreover, as traditional leaders in processor design (like Arm) face increasing licensing constraints or strategic shifts, an open foundation like RISC-V offers an attractive alternative for companies wanting to future-proof their hardware roadmaps.
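
To illustrate the low-precision arithmetic mentioned above, here is a toy sketch of an int8-quantized dot product approximating its floating-point counterpart. On hardware with int8 multiply-accumulate units (the kind a custom RISC-V extension can add), the inner loop becomes integer-only; all values here are made up.

```python
# Toy sketch of low-precision (int8) arithmetic: an integer-only dot product
# approximating a floating-point one. All values below are made up.

def quantize(vec, scale):
    """Symmetric int8 quantization: round(x / scale) clamped to [-127, 127]."""
    return [max(-127, min(127, round(x / scale))) for x in vec]

a = [0.12, -0.50, 0.33, 0.07]
b = [0.25,  0.10, -0.40, 0.90]
scale = 0.01                     # one int8 LSB represents 0.01

qa, qb = quantize(a, scale), quantize(b, scale)
int_dot = sum(x * y for x, y in zip(qa, qb))   # integer-only multiply-accumulate
approx = int_dot * scale * scale               # rescale back to real units
exact = sum(x * y for x, y in zip(a, b))

print(f"fp dot = {exact:.4f}, int8 dot = {approx:.4f}")
```

The quantized result tracks the floating-point one closely while every multiply-accumulate runs on 8-bit integers, which is why dedicated low-precision units deliver much higher throughput per watt.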

Custom Silicon: Tailoring Processors to AI Workloads

The “custom silicon” part of the webinar title refers to the creation of chips specifically architected for particular AI demands. Rather than using generic CPUs and off-the-shelf components, custom silicon can embed accelerators, optimize memory hierarchies, and integrate unique instruction extensions that speed up AI computations at lower energy consumption. In a field where efficiency and performance per watt are critical, these gains matter.

For example, SiFive’s own products, such as its Intelligence and Performance families, integrate vector and matrix computation units into RISC-V CPUs, allowing these cores to act as accelerator control units that manage AI workloads more efficiently than general-purpose processors alone. This approach drastically reduces overhead and can enable better AI performance on devices ranging from autonomous sensors to cloud servers.

Another important theme is ecosystem enablement. A custom silicon strategy only succeeds if a robust toolchain, including compilers, libraries, and runtime support, enables developers to target these designs. SiFive and partners have been building out support for major AI frameworks and compilers so that developers can deploy models efficiently on RISC-V platforms without sacrificing software compatibility or developer productivity.

Industry Collaboration and Co-Design

Webinars on RISC-V often include discussions about ecosystem partnerships and co-design approaches. For instance, recent announcements highlight collaborations between SiFive and companies such as NVIDIA, integrating technologies like NVLink to enable coherent, high-bandwidth CPU-to-accelerator communication, a major innovation for AI data centers where latency and bandwidth can dramatically impact scaling and throughput.

Similarly, the adoption of RISC-V by other ecosystem players, including major cloud providers and AI accelerator developers, underscores a broader industry shift toward heterogeneous computing architectures where CPUs, GPUs, and custom accelerators work in concert. These partnerships demonstrate how open ISAs and custom silicon are no longer niche, they are becoming central to next-generation AI infrastructure design.

Takeaways for Developers and Architects

A webinar like “SiFive AI’s Next Chapter: RISC-V and Custom Silicon” serves multiple audiences: hardware architects seeking insights on cutting-edge silicon design; software developers interested in how AI workloads can be optimized on RISC-V; and industry strategists evaluating open standard architectures against incumbent designs. Key takeaways would include:

  • How RISC-V’s modular ISA facilitates tailored processor designs for specific AI models and workloads.
  • The advantages of custom silicon in boosting performance and efficiency for AI machine learning functions.
  • Case studies or technical deep dives showing how SiFive’s RISC-V IP can be applied across edge, embedded, and data center use cases.
  • A look into emerging collaborations and ecosystem developments that broaden the practical applicability of RISC-V.

Bottom Line: This webinar represents not just a technical briefing but a reflection of a broader industry narrative: open, customizable hardware built on RISC-V is steadily transforming the AI computing landscape. As AI models continue to grow in complexity and deployment scenarios diversify, processor architectures that offer flexibility, efficiency, and extensibility – hallmarks of RISC-V and custom silicon – are set to play a foundational role in the future of AI.

Also Read:

SiFive to Power Next-Gen RISC-V AI Data Centers with NVIDIA NVLink Fusion

Tiling Support in SiFive’s AI/ML Software Stack for RISC-V Vector-Matrix Extension

RISC-V Extensions for AI: Enhancing Performance in Machine Learning


Smarter IC Layout Parasitic Analysis

Smarter IC Layout Parasitic Analysis
by Daniel Payne on 02-18-2026 at 10:00 am

ParagonX flow

IC layout parasitics dominate the performance of custom digital, analog, and mixed-signal designs, so the challenge becomes how to identify root causes and quantify the effects of parasitics during early design stages. The old method of iterating between layout, extraction, and SPICE simulation, followed by manual debug and analysis, is too slow and error prone to be relied upon. A smarter approach has been developed in an EDA tool called ParagonX from Synopsys, so I attended a recent webinar to become more informed. ParagonX came from the start-up Diakopto, which was acquired by Ansys; Synopsys then acquired Ansys.

Rob Dohanyos of Synopsys opened the webinar with an overview of ParagonX, and then most of the time was spent in a live demo, something rather rare among EDA vendors. The ParagonX tool lets a circuit designer analyze, debug, visualize, and even optimize IC layout parasitics for any technology node and any circuit design style. Typical users are designing high-speed or high-precision circuits such as SERDES, optical transceivers, ADCs, DACs, SRAMs, and clocks. Parasitic-sensitive designs also benefit from this analysis: analog, power nets, PMICs, ESD networks, and guard rings. Smaller nodes benefit even more from ParagonX, all the way down to 3nm. Instead of spending weeks or months debugging parasitic effects, a designer can reduce that time to hours or minutes.

Circuit designers and layout engineers will find the new tool to be easy to use out of the box, with quick run times, while providing insights to any problem areas. Here’s where ParagonX fits into your existing design and layout flow:

This tool can accept netlists that are hundreds of gigabytes in size and analyze nets that have hundreds of millions of nodes or resistors, all made possible through its own binary netlist database. Users can expect fast netlist loading for top-hierarchy analysis, including power net analysis, all from the interactive GUI. There are six basic analysis features in ParagonX:

Invoking the tool brought up a GUI with choices grouped by analysis features, then they loaded a netlist in a few seconds. All nets could be traversed with a hierarchical widget, plus searching for nets with wildcards made it fast to find a specific net. Point to point (P2P) resistance was shown by selecting start and end points, and then the resistance contributions were displayed by layer type and percentage contribution.
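
The per-layer breakdown shown in the demo can be illustrated with a toy sketch (not ParagonX’s actual engine): given extracted series resistors tagged by layer along a P2P path, total the resistance and report each layer’s percentage contribution. The segment values are invented, and real paths also contain parallel branches that the tool resolves.

```python
# Toy illustration (not ParagonX's engine) of a per-layer P2P resistance
# breakdown for a series path of extracted resistors. Values are invented.

segments = [                     # (layer, resistance in ohms)
    ("M1", 12.0), ("VIA1", 3.0), ("M2", 8.0), ("VIA2", 3.0), ("M3", 4.0),
]

total = sum(r for _, r in segments)

by_layer = {}                    # aggregate resistance per layer
for layer, r in segments:
    by_layer[layer] = by_layer.get(layer, 0.0) + r

# Report layers from largest to smallest contribution
for layer, r in sorted(by_layer.items(), key=lambda kv: -kv[1]):
    print(f"{layer}: {r:.1f} ohm ({100.0 * r / total:.1f}%)")
```

Sorting by contribution surfaces the dominant layers first, which is exactly how a designer decides where to widen wires or add vias.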

The top four layers had the most resistance, so next they showed sensitivity analysis with visualization of the layout by color-coded resistance.

Each of the many functions in the ParagonX tool is actually a Python script that you can customize to fit your own analysis needs. From the command line of the tool you can see the script used for each function. A function called Rview shows the resistance to all points on one net, and it became 1,000 times faster in the latest tool release. Rob ran Rview on a sample net VIN, showing 240 instance pins with a resistance distribution of 100 to 450 ohms.

Next in the demo, the capacitive coupling function was run on net VIN against all aggressor nets, showing that VSS had the most coupling to VIN at diffusion. Metal 3 had the highest coupling between VIN and CLOCK S. Users can tell which nets and layers are causing the most coupling, both numerically and visually.

RC delay was another function demonstrated with a start point of VIN to all instance points, then a device characterization file was generated for use with a specific SPICE tool. Sensitivity of RC delay to parasitics was the next function displayed using a rainbow of colors.

The net matching function was run on a differential pair, VIN and VIP, where the user set the matching criteria; simulation results showed layout areas in red that were not matching and areas in green that were matching properly. Several other functions were also demonstrated: Layout Parasitic Screener, what-if analysis, victim noise analysis, and glitch analysis.

Summary

Circuit and layout designers now have a much smarter way to quickly analyze IC layout parasitics, find the root cause and even begin to optimize their designs by using the ParagonX tool in their flow. This really is a new EDA tool category and will benefit custom-digital, analog and mixed-signal design projects.

Watch the archived webinar online.

Related Blogs