
Advancements in 3D Stacked IGZO 2T0C DRAM for Computing-in-Memory Applications

by Admin on 06-10-2025 at 8:00 am


In the rapidly evolving field of artificial intelligence (AI), the demand for efficient data processing has exposed limitations in traditional memory technologies. The paper “3D Stacked IGZO 2T0C DRAM Array with Multibit Capability for Computing in Memory Applications,” published in Science Advances on May 23, 2025, by Qijun Li, Qianlan Hu, Shenwu Zhu, Min Zeng, Wenjie Zhao, and Yanqing Wu, addresses these challenges through innovative use of indium gallium zinc oxide (IGZO) in dynamic random access memory (DRAM). This work demonstrates a 3D stacked, capacitorless DRAM array that promises higher density, longer data retention, and enhanced energy efficiency, particularly for computing-in-memory (CIM) paradigms.

Fig. 1. Artificial network and 3D 2T0C DRAM architecture. (A) Neuromorphic network for handwritten digit recognition. The processed feature maps are predicted through a fully connected neural network with int4 weights. (B) Schematic diagram of 3D 2T0C DRAM structure. The TR is on the first layer, and the TW is on the second layer. (C) Circuit diagram of the proposed 3D DRAM cell array; two DRAM cells form an int4 cell. First DRAM cell stores 3-bit data, and the second stores plus/minus data. (D) Scanning TEM and EDS mapping of 3D DRAM cell. The channel length of the TR is 270 nm, and the TW is 180 nm. The thickness of the isolation layer is 100-nm SiO2.

Traditional DRAM, typically structured as one-transistor-one-capacitor (1T1C), suffers from high power consumption due to frequent refresh cycles necessitated by charge leakage. This issue is exacerbated in AI applications involving matrix operations for tasks like image recognition, where data movement between memory and processors creates a “memory wall.” The authors highlight how emerging memories such as spin-transfer torque magnetic RAM (STT-MRAM), resistive RAM (RRAM), phase-change RAM (PCRAM), and ferroelectric RAM (FeRAM) offer alternatives but fall short in cycling endurance, speed, or integration complexity. IGZO-based two-transistor-zero-capacitor (2T0C) DRAM emerges as a superior option due to its ultralow off-state leakage current—on the order of femtoamperes—which enables retention times exceeding 100 seconds without refresh, drastically reducing energy use.
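To put that retention figure in context, here is a rough back-of-envelope comparison against conventional DRAM. The 64 ms refresh window is the common DDR figure and an assumption here; only the >100 s retention comes from the paper.

```python
# Rough comparison of refresh frequency: conventional 1T1C DRAM vs. IGZO 2T0C.
# The 64 ms refresh interval is the typical DDR figure (assumed, not from the
# paper); the 100 s retention is the demonstrated IGZO 2T0C result.

t_1t1c = 64e-3     # s, typical DDR refresh interval (assumption)
t_igzo = 100.0     # s, demonstrated IGZO 2T0C retention

refreshes_per_hour_1t1c = 3600 / t_1t1c
refreshes_per_hour_igzo = 3600 / t_igzo
reduction = refreshes_per_hour_1t1c / refreshes_per_hour_igzo

print(f"1T1C refreshes/hour: {refreshes_per_hour_1t1c:,.0f}")
print(f"IGZO refreshes/hour: {refreshes_per_hour_igzo:,.0f}")
print(f"Reduction factor: {reduction:,.0f}x")  # 100 / 0.064 = 1562x fewer
```

Even before accounting for avoided data movement, refreshing three orders of magnitude less often is where much of the claimed energy saving comes from.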

The innovation lies in the monolithic 3D stacking enabled by IGZO’s low thermal budget, compatible with back-end-of-line (BEOL) processes. This allows vertical integration beyond planar scaling limits, increasing bit density. The paper details the fabrication of an 8 by 8 array using advanced techniques like electron beam lithography (EBL) for gate patterning, atomic layer deposition (ALD) for high-κ dielectrics, and reactive magnetron sputtering for the amorphous IGZO channel. The process involves layering read transistors (TRs) and write transistors (TWs) with interconnections via dry etching and metal filling, ensuring electrical isolation with SiO2 insulators. Electrical characterization was performed at room temperature in a vacuum environment using a Keysight B1500A analyzer, confirming optimized performance.

Key results showcase the array’s multibit capability, achieving 3-bit storage per cell with retention over 100 seconds. This is a significant leap, as multibit storage amplifies density and efficiency for AI workloads. The authors map a convolutional neural network (CNN) for handwritten digit recognition onto the array, where an 8 by 8 feature map from the convolutional layer is stored and processed in-memory. Each cell handles int4 weights, enabling vector-matrix multiplication directly within the memory, bypassing data shuttling. Simulations yield an impressive 94.95% accuracy on the MNIST dataset, demonstrating practical CIM viability. Compared to conventional architectures, this approach enhances energy efficiency by minimizing refresh operations and data transfers.
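The two-cell int4 scheme can be sketched in NumPy. This is an illustration rather than the authors' code: the weight values and binary activations are made-up assumptions, and the real array performs the multiply in analog hardware rather than software.

```python
import numpy as np

# Sketch of the int4 weight scheme from Fig. 1C: each weight occupies two
# 2T0C cells, one holding a 3-bit magnitude and one holding the sign, and
# the array performs vector-matrix multiplication in place.

def encode_int4(w):
    """Split a signed int4 weight in [-7, 7] into (3-bit magnitude, sign bit)."""
    assert -7 <= w <= 7
    return abs(w), 1 if w < 0 else 0

def decode_int4(mag, sign):
    """Recover the signed weight from its two stored cells."""
    return -mag if sign else mag

rng = np.random.default_rng(0)
W = rng.integers(-7, 8, size=(8, 8))           # illustrative 8x8 weight matrix
cells = [[encode_int4(int(w)) for w in row] for row in W]

# In-memory vector-matrix multiply: an 8-element activation vector meets the
# stored weights without shuttling them to a separate processor.
x = rng.integers(0, 2, size=8)                 # binary activations, for simplicity
y = np.array([sum(int(xi) * decode_int4(*c) for xi, c in zip(x, row))
              for row in cells])

assert np.array_equal(y, W @ x)                # matches a conventional matmul
```

Splitting magnitude and sign across two cells keeps each cell's stored level within the demonstrated 3-bit window while still covering the full signed int4 weight range.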

The discussion emphasizes IGZO 2T0C DRAM’s advantages over competitors. While 1T1C DRAM offers high speed and endurance, its short retention (milliseconds) demands constant power. IGZO’s long retention supports nonvolatile-like behavior in volatile memory, ideal for edge AI devices with power constraints. The 3D stacking addresses density bottlenecks, potentially scaling to larger arrays for complex neural networks. However, challenges remain, such as optimizing write/read disturbances and scaling fabrication for commercial viability. The authors suggest future integrations with logic circuits for fully embedded CIM systems.

In conclusion, this research paves a promising pathway for overcoming the memory wall in AI computing. By leveraging IGZO’s unique properties, the 3D stacked 2T0C DRAM array not only extends retention and density but also enables efficient in-memory computations, heralding a shift toward more sustainable and powerful AI hardware. As edge devices proliferate, such innovations could transform applications from autonomous vehicles to wearable tech, reducing global energy consumption in data centers. With further refinements, IGZO-based memories may redefine the DRAM roadmap, blending high performance with low power in an era of exponential data growth.

Also Read:

PDF Solutions Adds Security and Scalability to Manufacturing and Test

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation


Analog Bits at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-10-2025 at 6:00 am


Analog Bits attends a lot of events. I know because I see them a lot in my travels. Lately, the company has been stealing the show with cutting-edge analog IP on a broad range of popular nodes and a strategy that will change the way design is done. Analog Bits is quietly rolling out a new approach to system design. One that delivers a holistic approach to power management during the architectural phase of design. The company believes this is the only way to achieve the required power and performance for demanding next-generation AI systems.

In their words, “Analog Bits is the leading energy management IP company, making power safe, reliable, observable and efficient.” There is a lot to unpack in that statement, and a lot to see at the Analog Bits booth #1320 at DAC. Let’s look at the IPs Analog Bits has available across several foundries. There are many more in the works.

TSMC 2nm

Analog Bits recently completed a successful second test chip tapeout at 2nm, but the real news is the company will be at DAC with multiple working analog IPs at 2nm. A wide-range PLL, a PVT sensor, a droop detector, an 18-40MHz crystal oscillator, and differential transmit (TX) and receive (RX) IP blocks will all be on display.

TSMC 3nm

Four power management IPs on TSMC’s CLN3P process will also be demonstrated. These include a scalable low-dropout (LDO) regulator, a spread spectrum clock generation PLL supporting PCIe Gen4 and Gen5, a high-accuracy thermometer IP using Analog Bits’ patented pinless technology, and a droop detector for 3nm.

Other TSMC Nodes, 4nm to 0.18u

Analog Bits has been an OIP partner with TSMC since 2004 and has a large portfolio of clocking, sensor, SERDES and IO IPs. You can check out the availability at the company’s product selector website at https://www.analogbits.com/product-selector/.

GlobalFoundries 12LP, 12LP+, 22FDX

An integer PLL, FracN/SSCG PLL, PCI G3 ref clock PLL, PVT sensor, and power on reset are all available in both GF 12LP and 12LP+. A PCI G4/5 ref clock PLL is available on GF 12LP+. A broad array of automotive IP is also available in GF 22FDX, including voltage regulators, power on reset, PVT sensors and IOs.

Samsung 4LPP, 8LPU and 14LPP

An integer PLL, PVT Sensor, power on reset, and droop detector are available on Samsung 4LPP. An automotive grade PLL, PVT sensor, and PCIe Gen4/5 SERDES are available on Samsung 8LPP/8LPU. An integer PLL, PVT sensor, and PCI Gen4 SERDES are available on Samsung 14LPP.

About the Intelligent Power Architecture

Optimizing performance and power in an on-chip environment that is constantly changing with on-chip variation and power-induced glitches can be a real headache. Multi-die design compounds the problem across many chiplets.

The Analog Bits view is that this problem cannot be solved as an afterthought. Plugging in optimized IP or modifying software late in the design process won’t work. The company believes that developing a holistic approach to power management during the architectural phase of the project is the answer.

So, Analog Bits is rolling out its Intelligent Power Architecture initiative. There is a lot of IP and know-how that work together to make this a reality. If power optimization is a challenge, you should stop by booth #1320 at DAC and see what solutions are available from Analog Bits.

To Learn More

You can find extensive coverage of Analog Bits on SemiWiki here. You can also visit the company’s website to dig deeper. See you at DAC.

DAC registration is open.

Also Read:

Analog Bits Steals the Show with Working IP on TSMC 3nm and 2nm and a New Design Strategy

2025 Outlook with Mahesh Tirupattur of Analog Bits

Analog Bits Builds a Road to the Future at TSMC OIP


SoC Front-end Build and Assembly

by Daniel Payne on 06-09-2025 at 10:00 am


Modern SoCs can be complex with hundreds to thousands of IP blocks, so there’s an increasing need to have a front-end build and assembly methodology in place, eliminating manual steps and error-prone approaches. I’ve been writing about an EDA company that focuses on this area for design automation, Defacto Technologies, and we met by video to get an update on their latest release of SoC Compiler, v11.

With SoC Compiler an architect or RTL designer can integrate all of their IP, auto-connect some blocks, define which blocks should be connected, and create a database for simulation and logic synthesis tools. Both the top level and subsystems can be built, or you can easily restructure your design before sending it to synthesis. Using SoC Compiler ensures that design collateral such as UPF, SDC and IP-XACT stays coherent with RTL. Here’s what the design flow looks like with SoC Compiler.

Another use of the Defacto tool is when physical implementation needs to be linked to RTL pre-synthesis. More precisely, when Place and Route of all the IP blocks isn’t fitting within the area goal, you capture the back-end requirements and create a physically-aware RTL to improve the PPA during synthesis, as the tool also has power and clock domain awareness. When building an SoC it’s important to keep all of the formats coherent: IP-XACT, SDC, UPF, RTL. Using a tool to keep coherence saves time by avoiding manual mistakes and miscommunications.

In the new v11 release there has been a huge improvement in runtime performance, where customers report seeing an 80X speed-up to generate new configurations. This dramatic speed improvement means that you can try out several configurations per day, resulting in faster time to reach PPA goals. What used to take 3-4 hours to run now takes just minutes.
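As a quick consistency check on those figures (the 3.5-hour midpoint below is an illustrative assumption, not a reported number):

```python
# Sanity-check the claimed 80X speed-up against the "3-4 hours to minutes" figure.
baseline_hours = 3.5            # midpoint of the reported 3-4 hour runtime (assumed)
speedup = 80                    # reported speed-up factor

new_runtime_min = baseline_hours * 60 / speedup
print(f"New runtime: {new_runtime_min:.1f} minutes")  # ~2.6 minutes

# At a few minutes per run, an 8-hour workday allows well over a hundred
# configuration builds, consistent with "several configurations per day".
runs_per_day = (8 * 60) // new_runtime_min
print(f"Possible runs per 8-hour day: {runs_per_day:.0f}")
```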

One customer of Defacto had an SoC design with 925 IP blocks, consisting of 4,900 instances, 5k bus interface connections, and 65k ad hoc connections; a complete integration ran in under one hour.

V11 includes IP-XACT support and management of TGI, vendor extensions, and multi-view, and it supports the latest UPF 3.1. IP-XACT improvements include support for parameterized add_connection and Insert IP-XACT Bus Interface (IIBI).

There are even some new AI-based features that improve tool usability and code generation tasks. You can use your own LLM or engines, and there’s no requirement to train the AI features.

Users of SoC Compiler can run the tool from the command line, GUI, or even use an API in Tcl, Python or C++ code. Defacto has seen customers use their tool in diverse application areas: HPC, security, automotive, IoT, AI. The more IP blocks in your SoC project, the larger the benefits of using SoC Compiler are. Take any existing EDA tool flow and add in the Defacto tool to get more productive.

Summary

During the past 17 years the engineering team at Defacto has released 11 versions of the SoC Compiler tool to help system architects, RTL designers and DV teams become more efficient during the chip assembly process. I plan to visit Defacto at DAC in booth 1527 on Monday, June 23 to hear more from a customer presentation about using v11.

Related Blogs


Siemens EDA Outlines Strategic Direction for an AI-Powered, Software-Defined, Silicon-Enabled Future

by Kalar Rajendiran on 06-09-2025 at 6:00 am


In a keynote delivered at this year’s Siemens EDA User2User event, CEO Mike Ellow presented a focused vision for the evolving role of electronic design automation (EDA) within the broader context of global technology shifts. The session covered Siemens EDA’s current trajectory, market strategy, and the changing landscape of semiconductor and systems design. Since Mentor Graphics became part of Siemens AG, the User2User event has become the annual opportunity to gain holistic insights into the company’s performance and strategic direction.

Sustained Growth and Strategic Investment

Siemens EDA has demonstrated strong growth over the past two years, both in revenue and market share. The company has responded by increasing R&D investment and expanding its portfolio. Notably, over 80% of new hires in fiscal year 2024 were placed in R&D roles, underscoring a strategic emphasis on product and technology development.

This growth comes during a period of industry consolidation and transformation. Without its own silicon IP offerings, Siemens is reinforcing its position around full-flow EDA, advanced simulation, and systems engineering. These areas are seen as key differentiators in a market where integration across domains is increasingly essential.

Extending Beyond Traditional EDA

Mike outlined Siemens’ expanding footprint into areas traditionally considered outside the core EDA domain. The $10.5 billion acquisition of Altair, a multiphysics simulation company, along with strategic moves into mathematical modeling, reflects a long-term strategy aimed at enabling cross-domain digital engineering. These capabilities are becoming increasingly important as products evolve into complex cyber-physical systems.

The company’s parent, Siemens AG, continues to invest heavily in digitalization, simulation, and lifecycle solutions. EDA now plays a central role in this technology stack, bridging the gap between silicon and the broader systems in which it operates.

Software-Defined Systems and AI as Central Drivers

At the heart of Siemens’ vision is the recognition that software is now the primary driver of differentiation. This shift means traditional hardware-led design processes must be restructured. The industry is moving toward a software-defined model, where silicon must be architected to support flexible, updatable, software-driven functionality.

This transition includes integrating AI directly into the design process—both as a capability within the tools and as a requirement for the end products. AI is accelerating demand for compute and increasing design complexity, but it also enables new methods of automation in verification, synthesis, and optimization. Siemens EDA is investing on both fronts: helping customers build silicon for AI, while embedding AI into its own design tool flows.

Multi-Domain Digital Twins

In today’s cyber-physical products—such as electric vehicles or industrial control systems—software and hardware must co-evolve in lockstep. The traditional handoff model, where completed hardware designs are passed to software teams, often results in inefficiencies and functional mismatches.

Instead, Siemens is promoting the use of multi-domain digital twins—integrated system models that span electrical, mechanical, manufacturing, and lifecycle domains. These models enable real-time collaboration and help prevent costly late-stage trade-offs. For example, a software update could inadvertently impact braking, weight distribution, and overall performance, resulting in a significant drop in range. A tightly coupled digital twin helps identify and mitigate such cascading effects before deployment.

Silicon Lifecycle Management and Embedded Monitoring

Beyond early design, Siemens is advancing silicon lifecycle management (SLM) by embedding monitors directly into chips to collect real-world operational data throughout their lifespan. This telemetry, feeding continuously into the digital twin, enables predictive maintenance, lifecycle optimization, and performance tuning as systems age.

This approach transforms silicon from a static component into a dynamic asset. Over-the-air updates, anomaly detection, and usage-aware software adaptation become feasible, improving product reliability and long-term value.

AI Infrastructure and Secure Data Lakes

To manage the escalating complexity of software-defined, AI-powered electronics, Siemens is building a robust AI infrastructure anchored in secure data lakes. These repositories aggregate verified design, simulation, and test data while maintaining strict access control—crucial for IP protection.

Domain-specific large language models (LLMs) and AI agents are being trained on this data to automate tasks such as script generation, testbench development, and design space exploration. Siemens is developing a unified AI platform to further support automation, decision-making, and cross-domain intelligence throughout the design lifecycle. The platform will be formally announced in the months ahead.

3D IC, Advanced Packaging, and Enterprise-Scale EDA

A key focus is the rise of 3D ICs and heterogeneous integration, from chiplets to PCB-level packaging. Siemens is enhancing its toolsets to support the convergence of digital and analog design, using AI-driven workflows to increase scalability and accuracy in these complex architectures.

These initiatives support Siemens’ broader push toward enterprise-scale EDA—democratizing access to advanced design tools through cloud platforms. These environments empower distributed teams, including less-experienced engineers, to collaborate on sophisticated designs. AI-powered automation bridges skills gaps, accelerates time-to-market, and enhances design quality.

Navigating Geopolitics and Sustainability

Mike also addressed external forces reshaping the semiconductor industry, including geopolitical pressures and the growing need for sustainability. Regionalization is accelerating, as countries invest in domestic design and manufacturing to mitigate supply chain risks and safeguard IP.

Meanwhile, AI and ubiquitous connectivity are driving compute demands beyond traditional energy efficiency gains. Siemens EDA is responding with low-power design methodologies, energy-efficient architectures, and system-wide optimization strategies that combine AI with simulation to reduce power consumption.

Summary

The central message of the keynote was that the future of electronics is AI-powered, software-defined, and silicon-enabled. For EDA providers, this means going beyond traditional design boundaries toward a full-stack, lifecycle-aware development model that integrates software, systems, and silicon from the outset.

Siemens EDA is positioning itself as a leader in this transformation—through comprehensive digital twins, embedded silicon lifecycle management, secure AI infrastructure, and cloud-enabled, democratized design platforms.

Also Read:

EDA AI agents will come in three waves and usher us into the next era of electronic design

Safeguard power domain compatibility by finding missing level shifters

Metal fill extraction: Breaking the speed-accuracy tradeoff


Cadence at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-08-2025 at 10:00 am


Cadence, a DAC 2025 industry sponsor, will exhibit in booth 1609 at the 62nd Design Automation Conference at San Francisco’s Moscone West Convention Center.

Highlights:

Paul Cunningham, SVP and GM of the System Verification Group, Cadence, will speak at Cooley’s DAC Troublemaker Panel. This discussion will be an open Q&A covering interesting and even controversial EDA topics. Monday, June 23, 3:00pm – 4:00pm, DAC Pavilion, Exhibit Hall, Level 2

Cadence will be at the DAC Chiplet Pavilion hosted by EE Times on Level 2, Exhibit Hall Booth 2308:

David Glasco, VP of the Compute Solutions Group, Cadence, will participate in a panel discussion, “Developing the Chiplet Economy.” The commercial chiplet ecosystem is rapidly evolving, driven by the need for greater scalability, performance, and cost efficiency. However, its growth is challenged by the lack of standardized interfaces, industry-wide collaboration, and the complexity of integrating chiplets from multiple vendors. This session will explore the readiness of advanced packaging technologies, the role of design tool vendors, silicon makers, and IP providers, and the collaborative efforts required to establish a thriving chiplet economy. Tuesday, June 24, 2:00pm – 2:55pm.

Brian Karguth, distinguished engineer, Cadence, will present “Cadence SoC Cockpit: Full Spectrum Automation for Chiplet Development.” The semiconductor industry is undergoing a transformation from traditional monolithic system-on-chip (SoC) architectures to modular, chiplet-based designs. This strategic shift is essential to mitigate complexities associated with scaling designs, optimize yields, and address rising fabrication costs driven by increasing transistor costs. To address these challenges, Cadence is offering a full set of chiplet development solutions, including our new Cadence SoC Cockpit, which aims to streamline and optimize the development of next-generation chiplet and system in package (SiP) designs. Learn about Cadence SoC Cockpit and its use for accelerating SoC designs. Tuesday, June 24, 3:50pm – 4:10pm.

Powering the Future: Mastering IEEE 2416 System-Level Power Modeling Standard for Low-Power AI and Beyond: Daniel Cross, senior principal solutions engineer, Cadence, will present a tutorial that will provide attendees with a comprehensive understanding of the IEEE 2416 standard, which is used for system-level power modeling in the design and analysis of integrated circuits and systems. Participants will gain the practical knowledge necessary to implement and utilize the standard effectively. The tutorial will highlight the pressing need for low-power design methodologies, particularly in cutting-edge fields like AI, where computational demands are high. Sunday, June 22, 9:00am – 12:30pm.

Vinod Kariat, CVP and GM of the Custom Products Group, Cadence, will participate in a panel discussion, “The Renaissance of EDA Startups,” on Tuesday, June 24, 2:30pm – 3:15pm.

Cadence will present a series of posters with GlobalFoundries, Intel, IBM, NXP, Samsung, and STMicroelectronics on Tuesday, June 24, 5:00pm – 6:00pm.

A complete list of Cadence activities at DAC can be found at Cadence @ Conference – Design Automation Conference 2025.

Cadence recruiters will be at the DAC Career Development Day on Tuesday, June 24, 10:00am – 3:30pm, inside the entrance of the Exhibit Hall on Level 1. Members of the DAC Community who are considering a job change or a new career opportunity are encouraged to complete an application and upload a résumé/CV, which will be shared in advance with participating employers. Attendees may stop by at any time on Tuesday between 10:00am and 3:30pm to speak with employers.

To arrange a meeting with Cadence at DAC 2025: REQUEST MEETING

DAC registration is open.

Also Read:

Verific Design Automation at the 2025 Design Automation Conference

ChipAgent AI at the 2025 Design Automation Conference

proteanTecs at the 2025 Design Automation Conference

Breker Verification Systems at the 2025 Design Automation Conference


Verific Design Automation at the 2025 Design Automation Conference

by Lauro Rizzatti on 06-08-2025 at 8:00 am


Rick Carlson, Verific Design Automation’s Vice President of Sales, is an EDA trends spotter. I was reminded of his prescience when he recently called to catch up and talk about Verific’s role as provider of front-end platforms powering an emerging EDA market.

Verific, he said, is joining forces with a group of well-funded startups using AI technology to eliminate error-prone repetitive tasks for efficient and more productive chip design. “We’re in a new space where no one is sure of the outcome or the impact that AI is going to have on chip design. We know there are going to be some significant improvements in productivity. It’s going to be an amazing foundation.”

I was intrigued and wanted to learn more. Rick set up a call for us to talk with Ann Wu, CEO of startup Silimate, an engaging and articulate spokesperson for this new market. Silimate, one of the first companies to market, is developing a co-pilot (chat-based GenAI) for chip and IP designers to help them find and fix functional and PPA issues. Impressively, it is the first EDA startup to get funding from Y Combinator, a tech startup accelerator. Silimate is also a Verific customer.

Ann was formerly a hardware designer at Apple, a departure from the traditional EDA developer profile. Like Ann, the founders of many of the new breed of EDA startups were formerly designers at Arm, NVIDIA, SpaceX, Stanford and Synopsys.

While doing a startup was always part of her game plan, Ann’s motivation for becoming an entrepreneur came from frustrations within the chip design flow and availability of new technology to solve some of these pressing issues.

AI, Ann acknowledged, may provide a solution to some of the problems she encountered, which is the reason behind the excitement about and appetite for AI in EDA applications. “Traditional EDA solutions solve isolated problems through heuristic algorithms. There’s a high volume of gray area in between these well-defined boxes of inputs and outputs that had previously been unsolvable. Now with AI, there is finally a way to sift through and glean patterns, insights and actions from these gray areas.”

We turn to the benefits of EDA using AI technology. “Having been in the industry as long as I have,” says Rick, “I know the challenges are daunting, especially when you consider that our customers want to avoid as much risk as possible. They want to improve the speed to get chips out, but they are all about de-risking everything.”

I ask Ann if adding AI is only a productivity gain. “Productivity as a keyword is not compelling,” she says. It’s an indirect measure of the true ROI, she notes, adding that it’s ultimately reducing the time to tape out while achieving the target feature set that engineering directors and managers look for.

“What we are doing has been time-tested,” answered Rick when asked why these startups are going to Verific. “We recently had a random phone call from a researcher at IBM. He already knew that IBM was using Verific in chip design. He said, ‘I know that we need to deal with language, and Verific is the gold standard.’

“We’re lucky we’ve just been around long enough. Nobody else in their right mind would want to do what we’ve done because it’s painstaking. I wouldn’t say boring, but it’s not as much fun as what Ann is doing, that’s for sure.”

As we move on to talk about funding and opportunities, Rick jumps in. “When people look at an industry, they want to know the leaders and immediately jump to the discussion of revenue and maturity. EDA is a mature industry and a three- or four-horse race. I think there are more horses at the starting line today that have the potential to make a dramatic impact.

“We’ve got an incredible amount of funds we can throw at this, assuming that we can achieve what we want to achieve. This is not something that just came along. This is a seismic shift in the commitment to use all the talent, tools, technology and money to make this happen.

“To me, it’s not a three-horse race—maybe it’s a 10-horse race. We really won’t know until we look back in another six months or a year from now at what that translates to. I am betting on it because the people doing this for the most part are not professional CAD developers. They looked at the problem and think they can make a dent.”

DAC Registration is Open

Notes:

Verific will exhibit at the 62nd Design Automation Conference (DAC) in Booth #1316 at the Moscone Center in San Francisco from June 23–25.

Silimate’s Akash Levy, Founder and CTO, will participate in a panel titled “AI-Enabled EDA for Chip Design” at 10:30am PT Tuesday, June 24, during DAC.

Also Read:

Breker Verification Systems at the 2025 Design Automation Conference

The SemiWiki 62nd DAC Preview


Video EP8: How Defacto Technologies Helps Customers Build Complex SoC Designs

by Daniel Nenni on 06-06-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Chouki Aktouf, CEO and Founder of Defacto Technologies. Dan explores the challenges of building complex SoCs with Chouki, who describes the difficulty of managing complexity at the front end of the process while staying within PPA requirements and still delivering a quality design as quickly and cost-effectively as possible.

Chouki describes how Defacto’s SoC Compiler addresses the challenges discussed along with other important items such as design reuse. He provides details about how Defacto is helping customers of all sizes to optimize the front end of the design process quickly and efficiently so the resulting chip meets all requirements.

Contact Defacto

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

by Daniel Nenni on 06-06-2025 at 6:00 am

Dan is joined by Graeme Hickey, vice president of engineering at PQShield. Graeme has over 25 years of experience in the semiconductor industry creating cryptographic IP and security subsystems for secure products. Formerly of NXP Semiconductors, he was senior manager of the company’s Secure Hardware Subsystems group, responsible for developing security and cryptographic solutions for an expansive range of business lines.

Dan explores with Graeme the changes ahead to address post-quantum security and what they mean for chip designers over the next five to ten years. Graeme explains that time is of the essence: chip designers should start implementing current standards now to be ready for the requirements arriving in 2030, a process that will continue over the next five to ten years.

Graeme describes the ways PQShield is helping chip designers prepare for the post-quantum era now. One example he cites is the PQPlatform-TrustSys, a complete PQC-focused security system that provides architects with the tools needed for the quantum age and beyond. Graeme also discusses the impact of the PQShield NIST-ready test chip. Graeme describes what chip designers should expect across the supply chain as we enter the post-quantum era.

Contact PQShield

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


ChipAgent AI at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-05-2025 at 10:00 am


The semiconductor world is gathering at DAC 62, and ChipAgents AI is coming ready to show why agentic AI is the missing piece in modern RTL design and verification. Whether you’re drowning in terabytes of waveform data, grinding toward 100% functional coverage, or hunting for ways to accelerate time-to-market, our sessions and live demos will give you a first-hand look at how autonomous AI agents can transform your flow.

ChipAgents AI @ DAC 62: Where Agentic AI Meets Next-Gen Verification

June 23–25, 2025 • Moscone West, San Francisco

ChipAgents Sessions
  • Mon 6/23, 10:30 a.m. • Exhibitor Forum (Level 1) • “Taming the Waveform Tsunami: Agentic AI for Smarter Debugging” — See Waveform Agents trace failure propagation across modules and time in seconds, no manual spelunking required. Real case studies show days-long debug cycles cut to minutes.
  • Tue 6/24, 1:45 p.m. • Exhibitor Forum (Level 1) • “CoverAgent: How Agentic AI Is Redefining Functional Coverage Closure” — Watch CoverAgent analyze coverage reports, infer unreachable bins, and auto-generate targeted stimuli, driving up to 80% faster closure in complex SoCs.
  • Wed 6/25, 11:15 a.m. • DAC Pavilion (Level 2) • “Beyond Automation: How Agentic AI Is Reinventing Chip Design & Verification” — CEO Prof. William Wang reveals how multi-agent workflows tackle constraint solving, automated debug, proactive design optimization, and more.

Tip: All three talks are designed for live Q&A—bring your toughest verification pain points.
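The CoverAgent session describes a general pattern: analyze coverage results, find the bins no test has hit, and generate stimuli aimed at exactly those bins. As a toy illustration only — this is not ChipAgents’ actual algorithm, and the bin/stimulus model here is invented for the example — the core loop might look like:

```python
# Toy sketch of coverage-gap hunting: given functional coverage bins and
# observed samples, find uncovered bins and propose targeted stimuli.
# Illustrative only; not ChipAgents' implementation.

def find_coverage_gaps(bins, samples):
    """Return the (name, lo, hi) bins that no sample has hit."""
    return [b for b in bins if not any(b[1] <= s <= b[2] for s in samples)]

def targeted_stimuli(gaps):
    """Propose one stimulus per uncovered bin: the bin midpoint."""
    return {name: (lo + hi) // 2 for name, lo, hi in gaps}

# Hypothetical coverage model: bins over a transaction-size field.
bins = [("small", 0, 15), ("medium", 16, 127), ("large", 128, 255)]
samples = [3, 7, 200, 190, 255]           # sizes seen in regression

gaps = find_coverage_gaps(bins, samples)  # "medium" was never exercised
stimuli = targeted_stimuli(gaps)

print(gaps)     # [('medium', 16, 127)]
print(stimuli)  # {'medium': 71}
```

A real flow would parse coverage databases and drive a constrained-random testbench rather than picking midpoints, but the gap-then-stimulus structure is the same.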

Live Demo & 1-on-1s

Exhibition Booth #1308, Level 1 • 10 a.m.–6 p.m. daily

  • Waveform Agents: Natural-language root-cause analysis on multi-TB VCD/FST dumps
  • CoverAgent: Autonomous coverage gap hunting & stimulus generation
  • ChipAgents CLI & VS Code Extension: Plug-in AI agents for Verilog, SystemVerilog, UVM

Come with your own specs, traces or coverage reports and we’ll run them live.

Why Agentic AI Now?

  • Scale: LLM-powered agents reason across RTL, waveforms, testbenches, logs, and documentation simultaneously.
  • Speed: Hypothesis-driven search slashes debug and closure cycles by orders of magnitude.
  • Explainability: Results are surfaced as step-by-step causal chains, so engineers stay in control.
  • Complementary: Works alongside existing simulators, formal tools, and waveform viewers—no rip-and-replace.

Meet the Team

  • William Wang – Founder & CEO, UCSB AI faculty
  • Zackary Glazewski – Forward-Deployed Engineering Lead
  • Mehir Arora – AI Research Engineer, Functional Coverage Specialist

They’ll be joined by the engineering crew behind our SoC-scale deployments and early-access customers.

Book a Private Briefing or Join Our Private Party

Slots fill fast during DAC week. To reserve a 30-minute roadmap briefing—or to request an invitation to our private rooftop dinner for semiconductor executives and leading engineers—visit chipagents.ai or stop by Booth #1308.

See You in San Francisco! DAC Registration is Open

If your verification team is buried under data, waveforms, coverage debt, or deadline pressure, ChipAgents AI has something you’ll want to witness live. Mark your calendar for June 23–25, swing by Booth #1308, and discover how agentic AI is turning RTL understanding from an art into a science.

About us

We are reinventing semiconductor design and verification through advanced AI agent techniques. ChipAgents AI is pioneering an AI-native approach to Electronic Design Automation (EDA), transforming how chips are designed and verified. Our flagship product, ChipAgents, aims to boost RTL design and verification productivity by 10x, driving innovation across industries with smarter, more efficient chip design.

Also Read:

AlphaDesign AI Experts Wade into Design and Verification

CEO Interview with Dr. William Wang of Alpha Design AI


proteanTecs at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-05-2025 at 8:00 am


Discover how proteanTecs is transforming health and performance monitoring across the semiconductor lifecycle to meet the growing demands of AI and Next-Gen SoCs.

Stop by DAC booth #1616 to experience our latest technologies in action, including interactive live demos, and to explore our full suite of solutions — designed to boost reliability, optimize power, and enhance product quality for next-gen AI and data-driven applications.

Don’t miss our daily in-booth theater sessions, featuring expert talks from industry leaders in ASIC design, IP, EDA, and cloud infrastructure, including Arm, Andes, Samsung, Advantest, Alchip, Siemens, PDF Solutions, Teradyne, Cadence, GUC, and more! Plus, hear insights from proteanTecs’ own experts.

Interested in a deeper dive? We’re now booking private meeting room sessions tailored to your company’s needs. Learn how our cutting-edge, machine learning-powered in-system monitoring delivers unprecedented visibility into device behavior — from design to field.

During the show, we will be presenting multiple solutions, including:
  1. Power and Performance
  2. Reliability, Availability, Serviceability
  3. Functional Safety & Diagnostics
  4. Chip Production
  5. System Production
  6. Advanced Packaging

Meet us at Booth #1616

See the full booth agenda here.

Book a meeting with proteanTecs at DAC 2025

proteanTecs is the leading provider of deep data analytics for advanced electronics monitoring. Trusted by global leaders in the datacenter, automotive, communications, and mobile markets, the company provides system health and performance monitoring, from production to the field. By applying machine learning to novel data created by on-chip monitors, the company’s deep data analytics solutions deliver unparalleled visibility and actionable insights — leading to new levels of quality and reliability. Founded in 2017 and backed by world-leading investors, the company is headquartered in Israel and has offices in the United States, India, and Taiwan.

DAC registration is open.

Also Read:

Cut Defects, Not Yield: Outlier Detection with ML Precision

2025 Outlook with Uzi Baruch of proteanTecs

Datacenter Chipmaker Achieves Power Reduction With proteanTecs AVS Pro

Podcast EP279: Guy Gozlan on how proteanTecs is Revolutionizing Real-Time ML Testing