
NanoIC Extends Its PDK Portfolio with First A14 Logic and eDRAM Memory PDK

by Daniel Nenni on 02-04-2026 at 6:00 am


NanoIC has announced a major expansion of its process design kit (PDK) portfolio with the introduction of its first A14 logic and eDRAM memory PDK. This milestone reflects the company’s growing role in enabling advanced semiconductor design at cutting-edge technology nodes and addresses increasing industry demand for highly integrated, power-efficient system-on-chip (SoC) solutions.

As semiconductor processes continue to scale, the availability of robust and well-validated PDKs has become a critical success factor for chip designers. A PDK serves as the essential interface between a foundry’s manufacturing process and EDA tools, providing accurate models, design rules, device libraries, and verification decks. By extending its portfolio to include A14-class technology, NanoIC is positioning itself to support next-generation designs for applications such as AI, HPC, mobile processors, and advanced networking.

The newly released A14 logic PDK is designed to address the challenges associated with extreme scaling, including tighter design rules, increased variability, and complex power-performance trade-offs. NanoIC’s solution offers comprehensive transistor models, standard cell support, and reliability data that allow designers to confidently optimize performance, power consumption, and silicon area. This is especially important at advanced nodes, where even small inaccuracies in modeling can lead to costly redesigns or yield issues.

What sets this announcement apart is the inclusion of an eDRAM memory PDK alongside the logic offering. Embedded DRAM has re-emerged as an attractive memory option for advanced SoCs due to its higher density compared to SRAM and lower latency compared to off-chip DRAM. Integrating eDRAM directly on logic chips enables designers to build memory-rich architectures that improve bandwidth and energy efficiency, key requirements for data-intensive workloads such as AI inference and edge computing.

NanoIC’s A14 eDRAM PDK provides designers with the tools needed to seamlessly integrate memory blocks into complex SoC designs. The PDK includes memory cell libraries, timing and power models, and process-aware design rules that ensure manufacturability and reliability. By aligning the eDRAM PDK closely with the A14 logic process, NanoIC enables tighter co-optimization between logic and memory, reducing design complexity and accelerating time-to-market.

Another important aspect of the new PDKs is their compatibility with leading EDA platforms. NanoIC has emphasized interoperability and early design enablement, allowing customers to begin architectural exploration and IP development well before volume manufacturing. This early access is increasingly valuable as design cycles lengthen and the cost of advanced-node development continues to rise.

From a broader industry perspective, NanoIC’s move highlights a growing trend toward specialized and differentiated PDK offerings. As advanced nodes become more complex, chipmakers are seeking partners that can provide deep process expertise and tailored design enablement rather than one-size-fits-all solutions. By delivering both logic and eDRAM PDKs at the A14 level, NanoIC demonstrates its ability to support heterogeneous integration and memory-centric architectures that define modern semiconductor innovation.

Bottom line: NanoIC’s extension of its PDK portfolio with its first A14 logic and eDRAM memory PDK represents a significant step forward for the company and its customers. The new offerings address the technical demands of advanced semiconductor design while enabling higher performance, greater integration, and improved power efficiency. As the industry continues to push the limits of scaling and system complexity, comprehensive PDK solutions like NanoIC’s will play a crucial role in turning ambitious chip concepts into manufacturable reality.

Also Read:

TSMC’s 2026 AZ Exclusive Experience Day: Bridging Careers and Semiconductor Innovation

The Chronicle of TSMC CoWoS

TSMC’s CoWoS® Sustainability Drive: Turning Waste into Wealth


2026 Outlook with Coby Hanoch of Weebit Nano

by Daniel Nenni on 02-03-2026 at 10:00 am


Coby Hanoch is the CEO of Weebit Nano. Coby has nearly 45 years of experience in the semiconductor and related industries, including engineering, engineering management, sales, and executive roles. He was previously CEO at PacketLight Networks, and held VP Worldwide Sales roles at both Verisity and Jasper Design Automation.

Tell us a little bit about yourself and your company.

Weebit Nano is a provider of advanced non-volatile memory (NVM) IP. We develop and license our ReRAM technology to foundries, IDMs and product companies. I joined Weebit eight years ago, when we were effectively a startup with two engineers. Today, we are a production-ready ReRAM provider with over 60 employees, about a third of whom hold PhDs. We also have license agreements with onsemi, Texas Instruments, DB HiTek, and SkyWater, as well as several agreements with product companies.

What was the most exciting high point of 2025 for your company?

For me, the standout high point was the momentum behind commercial validation and our move into a genuinely production-ready position. The onsemi and TI agreements are good examples, because each combines manufacturing and product licensing within a single relationship, which is rare and powerful.

On the technology side, it was also exciting to see our readiness for demanding use cases: we’ve qualified our ReRAM to AEC-Q100 at 150°C and demonstrated more than 100,000 endurance cycles, and we’ve proven the technology in silicon across 130nm, 65nm, 28nm and 22nm, and successfully simulated it on FinFET nodes as well.

What was the biggest challenge your company faced in 2025?

One of the biggest challenges is that the industry doesn’t switch memory technologies overnight. Embedded flash is still widely used because it’s familiar and deeply embedded into design flows, even though scaling and integration constraints are getting harder to ignore.

Another practical challenge is simply execution at scale: supporting multiple customers and multiple projects means building the processes, teams, and infrastructure to deliver consistently across foundry integration, product modules, and ongoing engineering support.

How is your company’s work addressing this challenge?

We address the natural adoption hesitance by proving we are ready for real production: licensing to credible partners, demonstrated silicon, and qualification work that matters to customers. We also help customers see the significant advantages ReRAM has in terms of low manufacturing cost, faster access time and lower power consumption.

We’re also strengthening our operating model around customer delivery: expanding our global sales and support footprint, streamlining and automating procedures, and building a Customer Success team with deep fab experience so we can effectively run multiple customer programs.

What do you think the biggest growth area for 2026 will be, and why?

I expect many foundries and IDMs, with whom we have been talking for some time, to step forward and engage with us in licensing agreements. The demand for ReRAM is growing every day, and key players have already engaged with Weebit, so the perceived risk level is dropping significantly.

One of the biggest growth areas will be edge AI moving toward more integrated, monolithic designs where on-chip NVM becomes a real differentiator for cost, power, and security.

I also see strong growth in smarter automotive MCUs driven by electrification and ADAS, where reliability and advanced-node integration are increasingly important.

In reality, there are other compelling segments too, as almost every application needs embedded NVM, including more integrated analog and mixed-signal ICs (like smart PMICs), and high-reliability environments such as aerospace and LEO satellites.

How is your company’s work addressing this growth?

For edge AI, ReRAM is a strong fit because it enables keeping weights on-chip, which can reduce latency and power consumption, and improve security. ReRAM bits are also smaller than SRAM bits, supporting higher on-chip memory density and higher accuracy.

For automotive and harsh environments, we’ve focused on the reliability that customers need, including qualifying for AEC-Q100 at 150°C and endurance beyond 100,000 cycles. We are already demonstrating operation at higher temperatures.

For analog and mixed-signal designs, our ReRAM is a back-end-of-line (BEOL) technology, which makes it easier to integrate without compromising analog blocks, making these designs more efficient at lower manufacturing cost compared to flash.

Towards supporting growth in these markets and others, our focus in 2025 was setting up the infrastructure to support many big customers in parallel. As more and more companies move towards ReRAM as their embedded NVM of choice, our new Customer Success team will ensure the success of all the projects we engage in.

What conferences did you attend in 2025 and how was the traffic?

In general, we prioritize conferences where we can schedule high-quality meetings with foundries, IDMs, and SoC teams focused on embedded NVM, edge AI, and automotive. When the audience is concentrated, the traffic and meeting quality are strong. In 2025 we participated at Embedded World in Germany, CES in the USA, and numerous local shows around the globe. Our technologists also presented at conferences like CEA-LIST Tech Days, MPSoC’25, The Future of Memory and Storage (FMS) and the VLSI Symposium.

Will you participate in conferences in 2026? Same or more as in 2025?

We expect to participate at a similar level or slightly more than in 2025, focusing on events that concentrate on our target customers and convert into real project engagement.

How do customers normally engage with your company?

Customers typically engage with us in two ways. Foundries and IDMs license our technology for process integration and qualification, with license fees, NRE and support. Product companies license ReRAM modules for SoCs, again with license fees, possible NRE, and royalties once products go into production.

We also work with product companies on tailored ReRAM modules optimized to their chip, because foundries may not want to customize memory modules for each product. Companies can reach us through our website at www.weebit-nano.com.

Are you incorporating AI into your products?

We see AI as a major driver for ReRAM adoption, as ReRAM is well suited to edge AI and neuromorphic approaches. ReRAM offers significantly higher density than SRAM, enabling a larger number of model coefficients to be stored directly on chip, which in turn improves inference accuracy within the same silicon footprint. We have already validated AI inference using ReRAM in working silicon and are seeing increasing momentum around near-memory and in-memory computing approaches. And because each ReRAM cell can function analogously to a synapse, the technology aligns naturally with neuromorphic computing architectures.
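The synapse analogy can be sketched numerically. The toy model below illustrates the general in-memory-computing concept (it is not Weebit’s implementation, and all conductance and voltage values are invented): each cell’s conductance acts as a stored weight, so the current summed on each column of a crossbar is exactly one output of a matrix-vector multiply.

```python
# Illustrative crossbar model: I[j] = sum_i V[i] * G[i][j]
# (Ohm's law per cell, Kirchhoff's current law per column).

def crossbar_mac(conductances, voltages):
    """Model the analog multiply-accumulate of a ReRAM crossbar.

    conductances: rows x cols matrix of cell conductances (siemens)
    voltages:     per-row input voltages (volts)
    returns:      per-column output currents (amperes)
    """
    rows, cols = len(conductances), len(conductances[0])
    assert len(voltages) == rows
    return [sum(voltages[i] * conductances[i][j] for i in range(rows))
            for j in range(cols)]

# Toy example: 2 inputs feeding 3 "neurons" (columns).
G = [[1e-6, 2e-6, 0.0],
     [3e-6, 0.0,  1e-6]]   # programmed cell conductances (weights)
V = [0.5, 1.0]             # input voltages (activations)
print(crossbar_mac(G, V))  # column currents = weighted sums
```

The point of the sketch is that the multiply and the accumulate both happen in the memory array itself, which is why the technology maps naturally onto neuromorphic and in-memory-compute architectures.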

To be clear, we’re primarily an embedded memory IP and licensing company, so the AI aspect is mostly about enabling our customers’ AI-capable chips.

Is AI affecting the way you develop your products?

We believe there are many ways in which AI can make us more efficient, as well as enable better analysis of the data we accumulate. To this end, we have engaged with a leading university AI professor who is reviewing our R&D processes and procedures and recommending how AI can help improve them.

Additional comments?

I believe we’re at a turning point for embedded non-volatile memory. ReRAM is increasingly validated in real silicon, and in the commitments made by key players. It scales where embedded flash does not, and it integrates in a way that aligns with where SoCs are heading. I expect 2026 to be the year where many companies, both fabs and product companies, make the move towards using ReRAM as their embedded NVM of choice.

CONTACT WEEBIT NANO

Also Read:

Weebit Nano Moves into the Mainstream with Customer Adoption

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Weebit Nano is at the Epicenter of the ReRAM Revolution


How Switzerland Built a Global Semiconductor Edge by Thinking Smaller

by Admin on 02-03-2026 at 8:00 am


By Alain-Serge Porret, Vice President, Integrated & Wireless Systems, CSEM

Since ramping up several years ago, the global semiconductor and artificial intelligence (AI) race has been driven by scale: larger data centers, bigger and more powerful models, and with them increasingly complex and power-hungry chips. The race for scale saw global investments in AI reach $252.3 billion in 2024 alone, according to Stanford University, roughly a 50% increase from the previous year. Within this ecosystem, the prevailing assumption in many instances has been that competitiveness is a function of attracting the largest investments and creating the most computing power as fast as possible.

This logic has fueled significant innovations in recent years, but it has also created an environment in which only a handful of nations and organizations have the resources needed to join the race and participate at the highest levels.

With a population of less than nine million, Switzerland is a prime example of a country carving its own lane in this tight market. It is not a member of the EU, and it lacks the vast stretches of land and natural resources that fuel data centers and large fabrication plants in powerhouses like the United States and China. Its researchers have instead taken a unique approach to securing a seat at the table: rather than trying to outmuscle the bigger names, the Swiss have focused on outperforming in efficiency, specialization and precision, with a focus on ultra-low-power semiconductor design.

This route is beginning to shift from effective to essential as the world begins to grapple with supplying the resources, from rare earths to energy, that are required to support today’s dominant theories around semiconductor design.

Chips and AI Heading Towards the Energy Wall

Over the past few years large AI models have grown considerably, and with that growth has come the need for more complex chips, bigger data centers, and subsequently more resources. Recent reports have predicted that in the United States alone, data centers could consume upwards of 68 billion gallons of water a year by 2028, with an estimated three percent of all electricity consumption around the world being tied to AI demands by just 2030.

The scale-centric trajectory of the industry is colliding with physical and economic limits; power grids are already stretched thin. Even leading companies that once championed aggressive scale are now looking at how to properly size chips for the energy realities of today, incorporating efficiency measures, model compression, hardware specialization, and on-device intelligence to reduce costs and carbon footprint.

But in situations where you cannot scale up, you can always scale inwards, refining each part of the process, reducing unnecessary computation and optimizing to be more energy efficient. There are few nations doing this better than Switzerland.

By optimizing inward, chips can perform complex tasks, such as face anonymization, driver monitoring, medical inference, and condition monitoring, while using only a fraction of the energy that typical cloud-based AI pipelines require. Power demands are rising, and costs are rising with them; as they do, this level of optimization becomes central to the future of AI deployment.

Specialization Beats Scale When the Job Demands It

Switzerland’s approach is built on a growing recognition that general-purpose AI models are not always the most effective. The world’s largest AI models are extraordinary tools that are changing the way we work and live seemingly by the day. However, their breadth of function can come with tradeoffs, even beyond their high energy requirements. Oftentimes when we focus on becoming a jack of all trades, we naturally wind up being a master of none.

By contrast, Swiss research organizations and innovation centers, such as CSEM, have concentrated on tailored systems designed to excel at highly specialized functions. For example, work on custom Application-Specific Integrated Circuits (ASICs) has shown instances where specialized circuits can match, or even exceed, the performance of general-purpose processors, while only using a fraction of the energy. In other cases, domain-specific AI models, trained to recognize patterns in constrained environments, often outperform larger models when applied to targeted use cases.

While this focus limits potential applications, it homes in on core critical functions to perform specific tasks locally, reliably, securely and quickly. By understanding the constraints of one problem, engineers can develop solutions that utilize only enough computing power and energy needed to accomplish that one problem. Take, for instance, privacy-preserving AI systems that forget personal biometric data immediately after input, or driver-monitoring systems that need to run continuously, without affecting a vehicle’s battery performance. These challenges require intelligence that is small, local, and efficient, rather than a model that is attempting to accomplish everything at once, often within the cloud on a distant data center.

These specialized technologies serve as a complement rather than a competitor. This carves out a separate, clean lane for nations and research institutions that do not benefit from billions of investment dollars or massive resources. Switzerland’s contribution to the global ecosystem is not to replace large-scale AI, but to supply the high-efficiency components that serve specific functions in a highly sustainable and precise manner.

Precision Engineering as a National Advantage

Switzerland’s success in this niche is not accidental. It flows from a national ecosystem shaped by decades, if not centuries, of precision work, and an educational system that aims to perpetuate those skills for the digital age. From deep roots in watchmaking to more modern advanced manufacturing, biomedical instrumentation, and micro-electronics research, Switzerland has garnered a strong reputation for devices engineered to perform highly specialized tasks flawlessly. These strengths align naturally with the needs of ultra-low-power and energy-efficient semiconductor design.

The country has also worked carefully to create a collaborative environment that emphasizes speed and agility, enabling teams to quickly prototype and test specialized chips. Researchers and engineers work closely with industry partners, allowing concepts to move from lab to deployment quickly while maintaining high standards for performance and reliability. The emphasis on interdisciplinary interaction, combining education, manufacturing, technology and research, enables a focused approach throughout the process.

Switzerland’s political neutrality and stable research funding environment also allow long-term projects to thrive. Rather than chasing short-term market cycles, institutions can invest in technologies with value creation horizons measured in years, rather than months. Engineers working on chip development in the country embody the national ethos by identifying strategic niches where precision, reliability, and efficiency matter more than size, and then excelling within those boundaries.

Looking Ahead: Why This Model Matters for the Future of AI and Chips

As the global AI and semiconductor ecosystems evolve at a breakneck pace, Switzerland’s approach is creating a blueprint for countries and regions looking for viable ways to contribute to the global chips race without matching the massive scale. The future will not be defined solely by the largest models or the biggest data centers. Instead, it will be important to keep in mind that:

  • Efficiency is a competitive advantage. As energy becomes a limiting factor, systems that deliver strong performance at low power will hold increasing value across sectors.
  • Specialization can outperform scale. Domain-specific intelligence and custom hardware will continue to offer superior performance for real-time, safety-critical, and privacy-sensitive applications.
  • Niche excellence strengthens the global ecosystem. Small, highly optimized models and chips can integrate with and enhance larger AI systems, enabling better performance at the system level than any single approach could achieve alone.

As the AI and semiconductor industries look to the next decade, those who can focus on more precise and sustainable approaches are ideally positioned not only to get a piece of the pie but also to create a far stronger and more adaptable AI and chips environment for the entire globe.

Also Read:

Podcast EP329: How Marvell is Addressing the Power Problem for Advanced Data Centers with Mark Kuemerle

Agentic at the Edge in Automotive and Industry

Arteris Smart NoC Automation: Accelerating AI-Ready SoC Design in the Era of Chiplets

 


How 25G Ethernet, PCIe 5.0, and Multi-Protocol PHYs Enable Scalable Edge Intelligence

by Kalar Rajendiran on 02-03-2026 at 6:00 am


Physical AI is changing how intelligent systems interact with the real world. These systems must sense, process, and respond to data in real time. Unlike cloud AI, Physical AI depends on fast local processing and reliable distributed communication. This shift creates a new challenge. Systems must move large volumes of sensor and control data quickly and predictably. Adaptive dataflow architectures address this challenge. They coordinate data movement across networks, compute platforms, and physical interfaces.

Three technologies play central roles in enabling these architectures: 25G Ethernet, PCIe 5.0, and multi-protocol PHYs. Together, they create a balanced connectivity foundation for Physical AI, Edge AI, 5G infrastructure, and Industry 4.0 deployments.

25G Ethernet: The Foundation for Distributed AI Data Movement

Physical AI systems generate massive amounts of data. Cameras, LiDAR, radar, machine vision systems, and industrial sensors continuously produce high-bandwidth streams. This data must move reliably between distributed compute nodes.

25G Ethernet provides the transport fabric that enables this communication. It delivers high throughput while maintaining low and predictable latency. These characteristics are critical for real-time decision systems.

In autonomous vehicles, 25G Ethernet supports deterministic communication between sensors, domain controllers, and centralized compute units. In smart factories, it connects machine vision systems, robotics, and control platforms. In 5G networks, it supports fronthaul and midhaul data transport between radio units and baseband processing systems.

Another advantage of 25G Ethernet is scalability. It serves as a building block for higher-speed networking while maintaining strong power efficiency. This makes it well suited for distributed edge platforms that must balance performance and energy consumption.
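As a rough illustration of why a 25 Gb/s link comfortably carries such traffic, the back-of-the-envelope sketch below totals the bandwidth of a hypothetical vehicle sensor suite. All sensor figures (resolutions, frame rates, and the LiDAR/radar allowances) are illustrative assumptions, not numbers from the article.

```python
# Back-of-the-envelope: estimated sensor bandwidth vs. a 25G Ethernet link.
# All sensor parameters below are invented for illustration.

LINK_GBPS = 25.0  # raw 25G Ethernet line rate

def stream_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed video-style stream bandwidth in Gb/s."""
    return width * height * bits_per_pixel * fps / 1e9

sensors = {
    "front camera (1920x1080, 24 bpp, 60 fps)": stream_gbps(1920, 1080, 24, 60),
    "surround camera (1280x720, 24 bpp, 30 fps)": stream_gbps(1280, 720, 24, 30),
    "lidar point cloud (assumed 0.5 Gb/s)": 0.5,
    "radar + misc (assumed 0.2 Gb/s)": 0.2,
}

total = sum(sensors.values())
for name, gbps in sensors.items():
    print(f"{name}: {gbps:.2f} Gb/s")
print(f"total: {total:.2f} Gb/s of {LINK_GBPS} Gb/s "
      f"({100 * total / LINK_GBPS:.0f}% utilization)")
```

Even with uncompressed video streams, this hypothetical suite occupies well under half the link, leaving headroom for more sensors, protocol overhead, and latency-sensitive control traffic.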

While 25G Ethernet enables system-to-system data movement, internal compute platforms require equally efficient connectivity to process incoming data streams. This is where PCIe 5.0 plays a critical role.

PCIe 5.0: High-Performance Internal Connectivity for Edge Compute

PCIe 5.0 provides the internal data movement backbone within AI processing nodes. It connects CPUs, GPUs, AI accelerators, storage devices, and networking interfaces. These components must exchange data quickly to maintain real-time processing performance. Operating at 32 GT/s per lane, PCIe 5.0 doubles the bandwidth of PCIe 4.0. A full x16 configuration can deliver up to 128 GB/s of bidirectional throughput. This bandwidth supports demanding workloads such as sensor fusion, high-resolution video analytics, and real-time inference.
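The headline figures above are easy to verify. The short sketch below computes the raw x16 numbers and, as an additional detail not stated in the article, applies PCIe 5.0’s 128b/130b line encoding to approximate usable per-direction throughput (real delivered bandwidth is further reduced by packet and protocol overhead).

```python
# Sanity-check of the PCIe 5.0 bandwidth figures.
GT_PER_S = 32          # PCIe 5.0 transfer rate per lane (GT/s)
ENCODING = 128 / 130   # 128b/130b line-encoding efficiency
LANES = 16             # x16 link

raw_gbytes_per_dir = GT_PER_S * LANES / 8            # GB/s, ignoring encoding
enc_gbytes_per_dir = raw_gbytes_per_dir * ENCODING   # GB/s, after encoding

print(f"raw per direction:     {raw_gbytes_per_dir:.1f} GB/s")
print(f"raw bidirectional:     {2 * raw_gbytes_per_dir:.1f} GB/s")
print(f"encoded per direction: {enc_gbytes_per_dir:.1f} GB/s")
```

The raw bidirectional result matches the 128 GB/s figure quoted above; the encoded number shows that 128b/130b costs only about 1.5% of raw bandwidth.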

Although PCIe 6.0 exists, PCIe 5.0 remains highly relevant for edge deployments. It provides sufficient bandwidth for most inference and sensor processing workloads. At the same time, it avoids the higher power consumption and design complexity associated with newer signaling technologies.

Power efficiency is especially important for edge devices. PCIe 5.0 includes advanced power states that reduce energy consumption during idle or low-activity periods. Dynamic power gating helps minimize thermal load while maintaining system responsiveness. These features support automotive, industrial, and embedded AI platforms that operate under strict power constraints.

PCIe 5.0 also benefits from ecosystem maturity. Controllers, accelerators, and storage devices based on PCIe 5.0 are widely available and production-proven. This maturity improves interoperability and reduces integration risk. Designers can also optimize lane counts and channel configurations to balance bandwidth, area, and power.

While PCIe 5.0 enables high-speed data movement inside compute platforms and 25G Ethernet enables distributed communication, both rely on advanced physical signaling technologies. Multi-protocol PHYs provide this essential foundation.

Multi-Protocol PHY Architectures: Enabling Flexible Connectivity Convergence

Multi-protocol PHYs operate at the physical layer of high-speed communication systems. They provide the signaling infrastructure that enables reliable data transmission across electrical and optical channels.

Modern edge platforms often require support for multiple communication standards. These may include PCIe, Ethernet, CXL, and sensor interfaces such as JESD204. Multi-protocol PHYs allow these standards to share common SerDes resources.

This convergence reduces hardware complexity and improves silicon efficiency. It also allows systems to dynamically allocate high-speed I/O resources based on workload requirements. As Physical AI workloads evolve, platforms can adapt without major hardware redesign.
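The dynamic-allocation idea can be pictured with a small scheduling sketch. This is purely hypothetical (no vendor API or product behavior is implied): a fixed pool of SerDes lanes is partitioned among protocols in priority order based on workload demand.

```python
# Hypothetical sketch of lane partitioning in a shared SerDes pool.

def allocate_lanes(total_lanes, demands):
    """Greedy lane partition.

    demands: list of (name, lanes_wanted) in priority order
    returns: {name: lanes_granted}
    """
    grants, remaining = {}, total_lanes
    for name, wanted in demands:
        granted = min(wanted, remaining)  # grant up to what's left
        grants[name] = granted
        remaining -= granted
    return grants

# A 16-lane pool shared by PCIe, Ethernet, and a sensor interface:
# the lowest-priority consumer absorbs the shortfall.
plan = allocate_lanes(16, [("pcie5_x8", 8), ("eth_25g", 4), ("jesd204", 8)])
print(plan)
```

A real PHY would also have to respect legal lane widths per protocol (e.g. x1/x2/x4/x8 for PCIe) and reconfiguration timing; the sketch only shows the resource-sharing principle.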

Multi-protocol PHYs also improve reliability. Advanced equalization, forward error correction, and clock recovery technologies help maintain signal integrity in harsh environments. These capabilities are essential for automotive, industrial, and telecom deployments.

Coordinating Adaptive Dataflow Across Distributed AI Systems

Adaptive dataflow architectures require synchronization across multiple connectivity layers. 25G Ethernet moves data between distributed systems. PCIe 5.0 enables high-speed communication within compute nodes. Multi-protocol PHYs ensure reliable signal transport across both domains.

Together, these technologies allow AI pipelines to operate with predictable latency and scalable bandwidth. They also improve overall system power efficiency by reducing redundant hardware and enabling flexible resource allocation.

Industry Impact

In automotive platforms, these technologies support distributed sensing, centralized AI processing, and deterministic vehicle networking. In Industry 4.0 environments, they enable real-time robotics coordination, machine vision analytics, and predictive maintenance. In 5G infrastructure, they support distributed radio processing and AI-driven network optimization.

Across these industries, adaptive dataflow architectures improve system responsiveness, scalability, and operational reliability.

Synopsys Solutions for Adaptive Dataflow Architectures

Physical AI systems depend on fast and reliable data movement. Adaptive dataflow architectures enable these systems to coordinate sensing, processing, and control in real time.

Beyond raw performance, long-lifecycle applications also demand proven reliability, functional safety, and security under harsh operating conditions. Features such as ASIL readiness and robust verification processes are essential for meeting these requirements across automotive, industrial, and 5G domains. Designers also benefit from solutions that integrate seamlessly across MAC, PCS, and PHY layers, reducing complexity and ensuring interoperability.

Synopsys’ portfolio of IP solutions includes Ethernet, PCIe, and multi-protocol PHY IP with silicon-proven reliability, future-proofed for evolving connectivity standards. This creates a practical and power-efficient connectivity foundation for scalable next-generation edge intelligence platforms.

25G Ethernet PHY IP Performance Across PCIe 5.0 and 25GBASE-KR Modes

For more details, visit Synopsys IP for Edge AI.

Also Read:

From Wooden Boards to White Gloves: How FPGA Prototyping and Emulation Became Two Worlds of Verification… and How the Convergence Is Unfolding

From SoC to System-in-Package: Transforming Automotive Compute with Multi-Die Integration

Podcast EP337: The Importance of Network Communications to Enable AI Workloads with Abhinav Kothiala

 

 


The 71st International Electron Devices Meeting (IEDM 2025)

by Daniel Nenni on 02-03-2026 at 6:00 am


It is hard to believe this conference is older than most of the participants, including myself. The amount of history behind this conference is amazing. Back in 1955 the meeting began as the Electron Devices Meeting (EDM), organized by what later became the IEEE Electron Devices Society. Its core purpose was to bring together scientists and engineers who were trying to figure out how solid-state devices actually worked, at a time when the field was still young and rapidly evolving.

The first conference centered on:
  • Transistors, which had only been invented a few years earlier (1947)
  • Diodes and vacuum tubes, which were still widely used
  • Fundamental device physics, such as charge transport, junction behavior, and material properties
  • Manufacturing challenges, including reliability, yield, and reproducibility

Today, the 71st International Electron Devices Meeting (IEDM 2025) reaffirmed its position as the world’s premier forum for advances in semiconductor devices and technologies. Held in December 2025, the conference brought together researchers, engineers, and industry leaders under the theme “Shaping Tomorrow’s Semiconductor Technology,” highlighting both the depth and breadth of innovation driving the future of electronics.

IEDM 2025 featured a high-quality technical program organized into 41 sessions, reflecting record-breaking engagement from the global research community. A total of 923 papers were submitted, the highest number in the conference’s history, with 295 papers accepted, underscoring the event’s selectivity and technical rigor.

The program included three plenary talks, four focus sessions on emerging areas, and a rich mix of oral and poster presentations spanning logic, memory, power devices, sensors, optoelectronics, and emerging compute paradigms.

Attendance at IEDM 2025 demonstrated the conference’s strong international reach and cross-sector relevance. The event recorded 2,123 registered attendees, with the vast majority participating in person. Industry professionals accounted for 52% of attendees, followed by 39% from universities and 7% from government and research institutions, reinforcing IEDM’s role as a bridge between academic research and industrial application.

Participants represented a broad range of countries, reflecting the global nature of semiconductor innovation.

A major highlight of IEDM 2025 was its four focus sessions, each targeting transformative technology areas. Topics included efficient AI solutions across architecture, circuits, devices, and 3D integration; advances in thin-film transistor technologies; beyond-Von-Neumann and quantum-inspired computing; and silicon photonics for energy-efficient AI computing.

These sessions emphasized how scaling challenges are increasingly being addressed through system-level co-design, heterogeneous integration, and new device concepts rather than traditional transistor scaling alone.

IEDM 2025 also showcased a set of highlight technical papers that illustrated the cutting edge of device research. Notable breakthroughs included monolithic 3D CFET integration for future logic and SRAM, oxide-semiconductor channel transistors for high-density 3D DRAM, and monolithic 3D compute-in-memory architectures capable of delivering dramatic gains in energy efficiency.

Additional highlights addressed GaN and silicon co-integration for power and RF electronics, sub-micron pixel image sensors, and transistor-to-package thermal simulation techniques aimed at improving reliability in advanced 3D integrated circuits.

Beyond technical sessions, IEDM 2025 placed strong emphasis on professional development and community engagement. The program included six tutorials, two short courses, a career-focused luncheon, and an evening panel discussion examining the evolution of field-effect transistors and the growing role of AI in semiconductor design.

These events provided valuable opportunities for early-career researchers and seasoned professionals alike to gain perspective on both historical progress and future challenges.

An industry vendor exhibition featured leading semiconductor companies, equipment suppliers, and research organizations, further strengthening collaboration between academia and industry. On-demand access to conference content extended the reach of IEDM beyond the live event, enabling continued engagement with the material after the conference concluded.

Bottom line: IEDM 2025 successfully captured the state of the art in electron devices while pointing clearly toward the future. Through record participation, groundbreaking technical contributions, and a strong emphasis on emerging compute and integration paradigms, the conference demonstrated how the semiconductor community is collectively shaping the next era of electronics innovation.

Next we will cover the key presentations, so stay tuned.

Contact IEDM 

Also Read:

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

Verification Futures with Bronco AI Agents for DV Debug

Last Call: Why Your Real‑World Lessons Belong in DAC 2026’s Engineering


Advances in ATPG from Synopsys

Advances in ATPG from Synopsys
by Daniel Payne on 02-02-2026 at 10:00 am

Synopsys TestMAX family

I first learned about ATPG – Automatic Test Pattern Generation – in the 1980s at Silicon Compilers, then continued in the 90s at Viewlogic with the Sunrise tools, so it was illuminating to get an update from Synopsys on their ATPG technology by attending a webinar. Over the years, Synopsys has developed a family of test tools, shown below. Srikanth Venkat Raman, Product Management Director at Synopsys, introduced how their ATPG has added features to become timing-aware and power-aware and to use AI to minimize test cost.

Timing-Aware ATPG

Bruce Xue, Staff Engineer, described how timing-aware ATPG is made possible through fault models that account for transition delay, slack-based delay, path delay, and hold time.

Timing-aware fault models

Using PrimeTime for static timing analysis (STA), there are two ways to deal with timing violations: using functional SDC in ATPG or using violation-based SDC in ATPG.

Four challenges arise when using SDC in an ATPG flow:

  • ATPG quality of results drops when using SDC
  • Multi-cycle path SDC is treated as a false path
  • Violation SDC can’t match the functional timing
  • SDC written from PrimeTime has unnecessary commands for ATPG

New features in ATPG now address these challenges:

  • Native SDC reading
  • Multi-cycle Path ATPG
  • Improved timing exception ATPG quality of results
  • Write optimized SDC from PrimeTime

Five test results were presented showing improvements of 15% to 73% in total test cycles at the same test coverage using these SDC and MCP features. The result is an improved SDC flow and better QoR for coverage and reliability.

Power-Aware ATPG
Khader Abdel-Hafez, Scientist, was up next on the topic of new power-aware ATPG features. The approach is to generate power-friendly ATPG patterns by limiting sequential cell switching, both during capture and during shift cycles, and by using functional clock-gating switches. During ATPG test generation the power is estimated, and the power results can be compared against PrimePower, showing excellent correlation.

Tool users can set their power budget by limiting the number of switching scan cells, and if some patterns exceed that budget, those patterns are skipped. For power-aware shift support there is a software-based approach that uses adjacent fill to manage power during shift, or a hardware-based approach that turns off some chains during ATPG or uses clock divider circuitry to limit switching during shift.
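The adjacent-fill idea can be sketched generically. The snippet below is a toy illustration of the textbook technique — filling the don’t-care (‘X’) bits of a scan pattern with the nearest specified value so that fewer bit-to-bit transitions occur as the pattern shifts through the chain — and is not Synopsys TestMAX code; the pattern string and function names are invented for illustration.

```python
def adjacent_fill(pattern: str) -> str:
    """Fill 'X' bits with the preceding specified bit; leading
    'X' bits are back-filled from the first specified bit."""
    bits = list(pattern)
    last = None
    for i, b in enumerate(bits):          # forward pass
        if b in "01":
            last = b
        elif last is not None:
            bits[i] = last
    nxt = None
    for i in range(len(bits) - 1, -1, -1):  # back-fill any leading X's
        if bits[i] in "01":
            nxt = bits[i]
        else:
            bits[i] = nxt if nxt is not None else "0"
    return "".join(bits)

def shift_transitions(pattern: str) -> int:
    """Count adjacent bit transitions -- a simple proxy for shift power."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

raw = "1XX0XXX1X"
filled = adjacent_fill(raw)
print(filled, shift_transitions(filled))  # prints: 111000011 2
```

Filling the same don’t-care bits randomly could produce many more transitions per shift cycle, which is exactly the switching activity a power budget is meant to cap.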

The PrimePower tool can also be used with the ATPG tool as shown in this flow diagram:


TestMAX ATPG has features to manage your chip power during both capture and shift operations, plus the integration with PrimePower improves power estimation during ATPG and improves QoR.

AI Technology

The final presenter was Theo Toulas, R&D Principal Engineer, and his topic was using AI technology called TSO.ai (Test Space Optimization) with TestMAX ATPG. TSO.ai aims to reduce your test times without changing the tool flow, while using the same CPU resources and producing deterministic results.

Synopsys has AI embedded into TestMAX ATPG, so there’s no separate model maintenance and it learns from multiple designs run across an internal suite, so you just need to activate the feature in TestMAX ATPG. This AI technology analyzes your design from both simulation and design structure, applies learned strategies to optimize your ATPG parameters, then optimizes multiple ATPG heuristics using targeted solver efforts, producing a reduction in test cycles.

Feedback from customer designs shows an average test cycle optimization of 15.81%, while increasing CPU runtime by 2.04X. Here’s a plot showing a dozen designs with test cycle optimization benefits:

Users just add a single command to the flow, turning on the ATPG learning feature, so there’s no learning curve involved.

Summary

Test engineers strive to meet fault coverage goals while staying within power budgets, uncovering timing issues, and minimizing test cycles. That’s a tall order, and manual methods are not sufficient for the task at hand. New ATPG features added by Synopsys are addressing these critical issues through timing-aware and power-aware flows, while AI is helping optimize test cycles. Working smarter, not harder, is what I saw in this webinar.

Watch the archived webinar after a brief registration online.

Related Blogs


TSMC’s 2026 AZ Exclusive Experience Day: Bridging Careers and Semiconductor Innovation

TSMC’s 2026 AZ Exclusive Experience Day: Bridging Careers and Semiconductor Innovation
by Daniel Nenni on 02-02-2026 at 8:00 am

TSMC AZ Day FAB 21

In February of 2026, Taiwan Semiconductor Manufacturing Company (TSMC) will host the TSMC AZ Exclusive Experience Day in Phoenix, Arizona, offering selected participants a rare opportunity to engage directly with one of the most advanced semiconductor manufacturing organizations in the world. The event will serve as an immersive introduction to TSMC’s Arizona operations, highlighting the company’s culture, technological leadership, and long-term commitment to building a robust U.S. semiconductor ecosystem.

Designed as more than a traditional recruitment event, the Exclusive Experience Day will provide attendees with an in-depth look at what it means to work at the forefront of advanced chip manufacturing. Participants will gain insight into TSMC’s operational philosophy, engineering rigor, and collaborative environment through curated presentations, interactive sessions, and direct engagement with company leaders and engineers. By opening its doors to a select audience, TSMC will aim to foster meaningful connections with future talent while communicating its expectations for excellence, discipline, and innovation.

The event will take place against the backdrop of TSMC’s rapidly expanding Arizona presence. As the company continues its multi-billion-dollar investment in advanced fabrication facilities in the state, Arizona is expected to become a cornerstone of U.S.-based semiconductor manufacturing. These fabs will play a critical role in supporting industries such as artificial intelligence, high-performance computing, automotive electronics, and advanced mobile devices. The Exclusive Experience Day will therefore not only introduce career opportunities but also contextualize how individual roles contribute to broader national and global technology goals.

Throughout the day, attendees will have opportunities to interact with TSMC engineers, technicians, managers, and human resources professionals. These interactions will allow participants to ask detailed questions about career paths, training programs, work expectations, and life inside a high-tech semiconductor fab. By facilitating candid conversations, TSMC will seek to demystify the realities of semiconductor manufacturing and help prospective employees assess alignment between their skills, aspirations, and the company’s mission.

A key feature of the experience will be guided exposure to fab operations and environments. Participants will learn about cleanroom protocols, advanced process technologies, and the precision required to manufacture chips at nanometer scales. For many attendees, this will be their first opportunity to understand the discipline and teamwork required to operate within one of the world’s most sophisticated manufacturing settings. This hands-on exposure will reinforce the idea that semiconductor manufacturing is both technically demanding and deeply collaborative.

Beyond technical learning, the 2026 TSMC AZ Exclusive Experience Day will emphasize culture and community. TSMC will present its core values, including long-term thinking, continuous improvement, and mutual trust, while also highlighting its investment in employee development and local engagement. As the company continues to integrate into the Arizona community, workforce development and talent cultivation will remain central to its strategy. Events like this will help build a shared sense of purpose between TSMC and the people who will support its operations for decades to come.

The timing of the event will align with ongoing hiring and workforce expansion efforts, making it especially relevant for students, early-career professionals, and experienced engineers seeking to participate in a once-in-a-generation manufacturing build-out. For attendees, the experience will offer clarity on expectations, opportunity, and impact, providing a realistic and inspiring view of what a career at TSMC Arizona could entail.

Bottom line: the 2026 TSMC AZ Exclusive Experience Day will represent more than an introduction to jobs or facilities. It will serve as an invitation to join a transformative effort in advanced manufacturing, where individual talent and global technology leadership intersect. By bringing future employees inside its vision, TSMC will reaffirm that the success of the semiconductor industry depends not only on capital and equipment, but on the people who design, build, and sustain it.

CONTACT TSMC

Also Read:

The Chronicle of TSMC CoWoS

TSMC’s CoWoS® Sustainability Drive: Turning Waste into Wealth

TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!


DAC – The Chips to Systems Conference 2026

DAC – The Chips to Systems Conference 2026
by Daniel Nenni on 02-02-2026 at 6:00 am

DAC 2026 Long Beach

The Design Automation Chips to Systems Conference is the preeminent international event for professionals involved in electronic design, system architecture, and EDA. Formerly known simply as the Design Automation Conference, DAC has evolved over more than six decades into a forward-looking forum that spans the entire spectrum from silicon chips to complex systems, hence its current tagline “Chips to Systems.”

I have attended DAC since the 1984 event in Albuquerque, New Mexico, right out of college. It is my favorite conference and the absolute best networking event for the EDA and IP industry. It offers a deep technical and academic program with numerous networking events. I have done book signings, panels, and presentations at DAC for many years and hope to for many more. My beautiful wife and I will be there enjoying the sun and fun in Long Beach. First on our list is a tour of the Queen Mary!

This year DAC will be held at the Long Beach Convention Center July 26 through July 29, 2026. This coastal venue marks a vibrant new chapter for the conference, bringing its signature blend of deep technical content, industry showcases, and professional networking to Southern California.

Scope and Mission

DAC serves a broad and diverse audience that includes system architects, chip designers, software engineers, validation specialists, and researchers from industry, academia, and government labs. Participants come from thousands of organizations worldwide to explore breakthroughs in design methods, automation tools, and emerging technologies.

The core mission of the conference is to advance innovation in how electronic systems are conceived, implemented, verified, and integrated. Covering everything from transistor-level circuit design to large-scale systems deployment and optimization, DAC stimulates cross-disciplinary dialogue and collaboration among specialists in hardware, software, and automation.

Technical Program and Tracks

The heart of DAC’s value lies in its technical program, which for 2026 will include a wide range of sessions, panels, and presentations addressing both fundamental research and practical engineering challenges. Key tracks include:

  • Research Track, highlighting original scientific and engineering breakthroughs.
  • Engineering and Practice Tracks, which focus on real-world design problems, tools, and workflows.
  • Workshops and Tutorials, offering more focused, hands-on learning and discussion opportunities.
  • Special Sessions and Panels, bringing varied perspectives on hot topics such as AI integration in hardware design and security.

Technical sessions selected by expert committees cover emerging topics ranging from AI-driven design automation to chiplets and exotic system architectures. There is also growing interest in fields like quantum computing hardware and cloud-native design environments.

Papers accepted for presentation are typically published in the conference proceedings and indexed in major digital libraries such as IEEE Xplore and the ACM Digital Library, providing a lasting academic impact.

Exhibition and Industry Engagement

Alongside the technical tracks, DAC features a large exhibition floor where leading companies in EDA, semiconductor IP, hardware tooling, and services demonstrate the latest technologies. Traditionally, around 150 exhibitors showcase solutions that span design automation, verification, architecture tools, and integrated hardware components.

The exhibition fosters direct interaction between vendors, users, and innovators, promoting knowledge transfer and potential partnerships. In addition to traditional booths, the event includes exhibitor forums and pavilion sessions where companies present deeper technical content right on the show floor.

Networking and Career Impact

DAC is also a vital networking venue. Attendees can connect through social events, informal meetups, and mentoring sessions, making it a critical space for early-career engineers and researchers to build relationships with established industry leaders. The diversity of participants—from startups to global corporations, and from graduate students to senior executives—creates a rich ecosystem for idea exchange.

In recent years, DAC has increasingly emphasized real-world impact and collaboration, encouraging submissions and participation that bridge academic research with industrial practice. This balance helps ensure that new methodologies and tools can move from concept to implementation faster.

Trends and Themes for 2026

Though specific session topics are still being finalized, some clear themes have emerged in the lead-up to 2026:

  • AI in EDA and System Design – exploring how machine learning, including agentic and generative models, is reshaping design flows.
  • Security and Trustworthy Systems, particularly as chips are embedded in critical infrastructure.
  • Chiplets and Advanced Integration, reflecting modular hardware approaches.
  • Cross-Domain Integration, such as hardware-software co-design and cloud-driven design methodologies.

Bottom line: DAC 2026 continues a long tradition of being at the forefront of electronic design innovation. It combines rigorous technical content, broad industry participation, and a global community of practitioners and researchers—all under the theme of “Chips to Systems.” Whether you are an engineer, researcher, or business leader, DAC offers a unique opportunity to learn, connect, and help shape the future of electronic systems.

Register for DAC.

Also Read:

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

Verification Futures with Bronco AI Agents for DV Debug

Last Call: Why Your Real‑World Lessons Belong in DAC 2026’s Engineering


CEO Interview with Naama BAK of Understand Tech

CEO Interview with Naama BAK of Understand Tech
by Daniel Nenni on 02-01-2026 at 6:00 pm

Naama Bak Headshot

Naama BAK is an entrepreneur with 15 years of experience in tech. He is the founder of Understand Tech, a generative AI platform for enterprises, and Trustii.io, a machine learning platform for data science challenges. He previously held roles at NXP Semiconductors, Orange, and Safran, working in cybersecurity across research, development, product marketing, and business development.

Tell us about your company

Understand Tech is a French and American AI company focused on building scalable, secure, and private AI solutions for the semiconductor industry. Having spent more than a decade in the semiconductor industry, I am keenly aware of the challenges these companies face. As product complexity continues to increase, employees and customers struggle to understand product functionality. Understand Tech was founded to address this complexity. We have created AI tools that enable virtual SMEs which can answer support and engineering questions, generate test plans, and solve a host of other problems caused by the sheer complexity of semiconductor products. Simply stated, we turn technical documentation into usable knowledge, and knowledge into decisions.

What motivated you to start this company?

Not long ago, while reviewing a technical specification, I spent over an hour trying to answer a simple but critical technical question: how is a Matter commissioner authenticated during commissioning? The information existed in the Matter smart home standard documentation, but it was fragmented, buried in dense text, and costly to extract.

That moment reflected a broader reality I’ve seen throughout my career in semiconductors and deep-tech: technical documents are foundational to how products are built, validated, and secured, yet they remain difficult to operationalize. Engineers and decision-makers spend too much time searching, interpreting, and cross-checking information that should be immediately actionable.

As generative AI matured, the opportunity became clear. We didn’t set out to build another chatbot. Instead, we created a system that understands complex technical content and delivers reliable, domain-specific answers, grounded in the source material enterprises already trust.

Together with a team of experienced engineers, we built Understand Tech to bridge the gap between advanced AI capabilities and the real, day-to-day needs of technical organizations: turning documentation into usable knowledge, and knowledge into decisions.

What problems are you solving?

Our AI tools simplify complexity without compromising security and privacy. They can be used to create virtual subject matter experts (SMEs) for pre-sales support and customer support, to assist distributors in understanding which product to use for a given use case, or to generate test plans.

What keeps your customers up at night?

They are worried about maintaining data privacy and security as they adopt AI solutions. In many cases, AI solutions will be ingesting some of their most sensitive assets, so data protection and security are paramount.

Describe your experience in starting and growing Understand Tech

It has been an exciting and challenging two years. Customer reception of our solution has been great, which is always very rewarding. Going from NXP, a company of more than 30,000 employees, to what started as a two-person company is a massive shift. There was literally no one besides myself and my cofounder to handle everything from product development and marketing to IT and finance. We now have an amazing group of people on our team, which makes it fun to go to work every day.

What application areas are your strongest?

One of the earliest use cases for our solution was implementing virtual SMEs for pre-sales and customer support. For example, Synaptics and SEALSQ are two of the companies we support; our product is the AI engine behind the chatbots on their websites. As an early use case, this is one of the most mature features of our product. We also have a very strong solution for test case generation for semiconductor products.

What does the competitive landscape look like and how do you differentiate?

While other companies are beginning to provide AI solutions, our main competition remains OpenAI and ChatGPT. Other solutions lack the ability to output long, structured documents such as test plans. Most importantly, they fail to provide the enterprise-grade security features our customers require.

What new features/technology are you working on?

We are just about to launch our new “AI in a Box” solution on February 12 at WAICF in Cannes, France. This solution brings all the power of our cloud platform to on-premise deployments. This allows companies to implement advanced AI entirely offline and fully inside their security perimeter. Our system is pre-configured and production-ready: simply plug it in, connect to your network, and start building custom AI solutions with no installation or configuration required. Enjoy the flexibility and performance of our cloud platform while maintaining complete control over security and deployment.

Your solution seems broadly applicable, why did you choose to target Semiconductor companies?

Our solution is quite flexible and can be used by anyone that requires privacy, security, and customization when implementing complex AI models. We are working with a few companies outside of the semiconductor industry, but we focus on semiconductor companies for several reasons. First, that is where I spent most of my career. I know many of the companies in the semiconductor space and, more importantly, I understand the challenges these companies face. Our solution was designed to help people working with complex products, and semiconductor companies have some of the most complex products in the world.

How do customers normally engage with your company?

You can learn more about us from our website at understand.tech or our LinkedIn page here. We are happy to schedule a demo or answer any questions. Our solution can be deployed through our private cloud, through our customer’s own private cloud, or through an on-premise appliance.

CONTACT Understand.Tech

Also Read:

CEO Interview with Dr. Heinz Kaiser of Schott

CEO Interview with Moshe Tanach of NeuReality

2026 Outlook with Paul Neil of Mach42


CEO Interview with Echo Yang of CSCERAMIC

CEO Interview with Echo Yang of CSCERAMIC
by Daniel Nenni on 01-31-2026 at 4:00 pm

Echo Yang


Echo Yang is the CEO of CSCERAMIC, a China-based manufacturer specializing in advanced ceramic materials and precision ceramic components for industrial and laboratory applications. With a background spanning international trade, manufacturing coordination, and engineering-driven supply chain development, Echo leads CSCERAMIC’s strategy in high-purity alumina ceramics, laboratory consumables, and custom-engineered ceramic solutions.

Under his leadership, CSCERAMIC has evolved from a traditional ceramic supplier into a technically focused manufacturer emphasizing material stability, dimensional control, and long-term performance in demanding operating environments. The company serves customers across materials testing, thermal processing, chemical systems, and high-temperature industrial equipment markets.

Tell us about CSCERAMIC.

CSCERAMIC is an advanced ceramics manufacturer focused on alumina-based ceramic materials and laboratory ceramic consumables. Our core products include high-purity alumina tubes, rods, crucibles, and custom ceramic components designed for thermal analysis, material testing, chemical processing, and high-temperature industrial systems.

Our approach is not centered on selling standard catalog items, but on understanding how ceramics behave under real operating conditions. Many of our customers face challenges related to thermal cycling, chemical exposure, and dimensional stability over long service periods. We position ourselves as a manufacturing partner that helps address those issues through material selection, process control, and precision machining.

What problems are you solving for your customers?

Many industrial and laboratory systems operate under conditions that push materials to their limits—high temperatures, aggressive chemical environments, and continuous operation cycles. Traditional materials often degrade gradually, leading to misalignment, contamination, or inconsistent performance.

We help customers reduce these risks by providing ceramic components that maintain structural integrity, thermal stability, and chemical resistance over time. In applications such as thermal analysis instruments, furnace systems, and chemical equipment, even small material changes can affect measurement accuracy or system reliability. Our goal is to eliminate material-related uncertainty so customers can focus on system performance rather than component replacement.

Where do you see your strongest application areas today?

Our strongest applications are in laboratory analysis and high-temperature industrial systems. This includes:

  • Thermal analysis consumables for DSC, TGA, and related instruments
  • High-purity alumina tubes and rods for furnaces and thermal processing equipment
  • Ceramic components for chemical and corrosive environments
  • Custom ceramic parts requiring tight dimensional tolerances

These applications share a common requirement: materials must behave predictably under heat and chemical exposure. That is where advanced ceramics, particularly alumina-based materials, offer clear advantages.

What keeps your customers up at night?

Reliability and consistency. Customers are concerned about gradual degradation rather than catastrophic failure. Issues such as micro-cracking, thermal distortion, or surface contamination can slowly compromise system accuracy or uptime.

Another concern is supply consistency. For many users, replacing ceramic components is not simply a matter of sourcing a part—it can require recalibration, validation, or downtime. Customers want confidence that the parts they receive today will behave the same way six months or two years from now.

How do you differentiate CSCERAMIC from other ceramic suppliers?

The main difference is our emphasis on engineering collaboration and process consistency. We spend significant time understanding how a ceramic component is used, not just how it is manufactured.

Rather than pushing standardized products, we focus on:

  • Stable raw material sourcing and purity control
  • Controlled sintering and machining processes
  • Repeatable dimensional and surface quality
  • Application-driven customization

This allows us to support customers who need ceramics to perform reliably over long operating cycles rather than simply meeting initial specifications.

What technology or capability improvements are you currently working on?

We are continuously improving our machining precision, surface finishing, and inspection methods for alumina ceramics. Small improvements in surface quality or dimensional control can significantly reduce wear, particle generation, or thermal stress in real applications.

We are also investing in better internal testing and validation workflows to better simulate customer operating conditions. This helps us identify potential failure modes earlier and improve component design before production scaling.

How do customers typically engage with CSCERAMIC?

Most engagements start with a technical discussion. Customers usually bring drawings, operating parameters, or performance challenges rather than just a part number. From there, we work together to refine material choice, tolerances, and design details.

We support both prototyping and long-term production, and we place a strong emphasis on communication throughout the process. Our website, https://www.csceramic.com, serves as an entry point for customers to understand our capabilities and initiate technical discussions.

Final thoughts?

As industrial systems become more precise and demanding, materials can no longer be treated as interchangeable commodities. Advanced ceramics play a quiet but essential role in system reliability, accuracy, and lifecycle performance.

Our focus at CSCERAMIC is to ensure that ceramic components support—not limit—the performance of the systems they are part of. That mindset will continue to guide how we develop our materials, processes, and customer partnerships.

CONTACT CSCERAMIC

Also Read:

CEO Interview with Dr. Raj Gautam Dutta of Silicon Assurance

CEO Interview with Naama BAK of Understand Tech

CEO Interview with Dr. Heinz Kaiser of Schott