
How 25G Ethernet, PCIe 5.0, and Multi-Protocol PHYs Enable Scalable Edge Intelligence

by Kalar Rajendiran on 02-03-2026 at 6:00 am

Ethernet Links Enabling In-Vehicle Networks and ADAS

Physical AI is changing how intelligent systems interact with the real world. These systems must sense, process, and respond to data in real time. Unlike cloud AI, Physical AI depends on fast local processing and reliable distributed communication. This shift creates a new challenge. Systems must move large volumes of sensor and control data quickly and predictably. Adaptive dataflow architectures address this challenge. They coordinate data movement across networks, compute platforms, and physical interfaces.

Three technologies play central roles in enabling these architectures: 25G Ethernet, PCIe 5.0, and multi-protocol PHYs. Together, they create a balanced connectivity foundation for Physical AI, Edge AI, 5G infrastructure, and Industry 4.0 deployments.

25G Ethernet: The Foundation for Distributed AI Data Movement

Physical AI systems generate massive amounts of data. Cameras, LiDAR, radar, machine vision systems, and industrial sensors continuously produce high-bandwidth streams. This data must move reliably between distributed compute nodes.

25G Ethernet provides the transport fabric that enables this communication. It delivers high throughput while maintaining low and predictable latency. These characteristics are critical for real-time decision systems.

In autonomous vehicles, 25G Ethernet supports deterministic communication between sensors, domain controllers, and centralized compute units. In smart factories, it connects machine vision systems, robotics, and control platforms. In 5G networks, it supports fronthaul and midhaul data transport between radio units and baseband processing systems.

Another advantage of 25G Ethernet is scalability. It serves as a building block for higher-speed networking while maintaining strong power efficiency. This makes it well suited for distributed edge platforms that must balance performance and energy consumption.

While 25G Ethernet enables system-to-system data movement, internal compute platforms require equally efficient connectivity to process incoming data streams. This is where PCIe 5.0 plays a critical role.

PCIe 5.0: High-Performance Internal Connectivity for Edge Compute

PCIe 5.0 provides the internal data movement backbone within AI processing nodes. It connects CPUs, GPUs, AI accelerators, storage devices, and networking interfaces. These components must exchange data quickly to maintain real-time processing performance. Operating at 32 GT/s per lane, PCIe 5.0 doubles the bandwidth of PCIe 4.0. A full x16 configuration can deliver up to 128 GB/s of bidirectional throughput. This bandwidth supports demanding workloads such as sensor fusion, high-resolution video analytics, and real-time inference.
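As a rough check on those numbers, the usable bandwidth of a PCIe 5.0 link can be estimated from the 32 GT/s lane rate and the link's 128b/130b line encoding. Below is a minimal sketch in Python; the function name is illustrative, not from any library.

```python
def pcie5_bandwidth_gbps(lanes: int) -> float:
    """Approximate usable PCIe 5.0 bandwidth per direction, in GB/s.

    PCIe 5.0 signals at 32 GT/s per lane with 128b/130b line encoding,
    so each lane carries 32 * (128/130) gigabits of payload per second.
    """
    gt_per_lane = 32.0          # GT/s signaling rate
    encoding = 128.0 / 130.0    # 128b/130b line-code efficiency
    payload_gbits = gt_per_lane * encoding * lanes
    return payload_gbits / 8.0  # convert Gb/s to GB/s

per_direction = pcie5_bandwidth_gbps(16)  # ~63 GB/s for a x16 link
bidirectional = 2 * per_direction         # ~126 GB/s both directions
```

A x16 link works out to roughly 63 GB/s per direction after encoding overhead, or about 126 GB/s bidirectionally, consistent with the commonly quoted "up to 128 GB/s" figure, which is based on the raw signaling rate.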

Although PCIe 6.0 exists, PCIe 5.0 remains highly relevant for edge deployments. It provides sufficient bandwidth for most inference and sensor processing workloads. At the same time, it avoids the higher power consumption and design complexity associated with newer signaling technologies.

Power efficiency is especially important for edge devices. PCIe 5.0 includes advanced power states that reduce energy consumption during idle or low-activity periods. Dynamic power gating helps minimize thermal load while maintaining system responsiveness. These features support automotive, industrial, and embedded AI platforms that operate under strict power constraints.

PCIe 5.0 also benefits from ecosystem maturity. Controllers, accelerators, and storage devices based on PCIe 5.0 are widely available and production-proven. This maturity improves interoperability and reduces integration risk. Designers can also optimize lane counts and channel configurations to balance bandwidth, area, and power.

While PCIe 5.0 enables high-speed data movement inside compute platforms and 25G Ethernet enables distributed communication, both rely on advanced physical signaling technologies. Multi-protocol PHYs provide this essential foundation.

Multi-Protocol PHY Architectures: Enabling Flexible Connectivity Convergence

Multi-protocol PHYs operate at the physical layer of high-speed communication systems. They provide the signaling infrastructure that enables reliable data transmission across electrical and optical channels.

Modern edge platforms often require support for multiple communication standards. These may include PCIe, Ethernet, CXL, and sensor interfaces such as JESD204. Multi-protocol PHYs allow these standards to share common SerDes resources.

This convergence reduces hardware complexity and improves silicon efficiency. It also allows systems to dynamically allocate high-speed I/O resources based on workload requirements. As Physical AI workloads evolve, platforms can adapt without major hardware redesign.
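To make the idea of dynamic I/O allocation concrete, the sketch below models a shared SerDes lane pool that can be partitioned among protocols. This is a conceptual illustration only; the class, method, and protocol names are hypothetical, not a real PHY firmware API.

```python
from dataclasses import dataclass, field

@dataclass
class SerdesPool:
    """Hypothetical model of a multi-protocol PHY's shared lane pool."""
    total_lanes: int
    assignments: dict = field(default_factory=dict)  # protocol -> lane count

    def free_lanes(self) -> int:
        return self.total_lanes - sum(self.assignments.values())

    def allocate(self, protocol: str, lanes: int) -> bool:
        """Assign lanes to a protocol if the pool still has capacity."""
        if lanes > self.free_lanes():
            return False  # request exceeds remaining lanes
        self.assignments[protocol] = self.assignments.get(protocol, 0) + lanes
        return True

pool = SerdesPool(total_lanes=16)
pool.allocate("pcie5", 8)         # x8 PCIe 5.0 link
pool.allocate("25g-ethernet", 4)  # four 25G Ethernet ports
```

In a real device, reallocation would also involve retraining the affected links, but the bookkeeping above captures the essential trade-off: one pool of lanes serving several standards instead of dedicated I/O for each.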

Multi-protocol PHYs also improve reliability. Advanced equalization, forward error correction, and clock recovery technologies help maintain signal integrity in harsh environments. These capabilities are essential for automotive, industrial, and telecom deployments.

Coordinating Adaptive Dataflow Across Distributed AI Systems

Adaptive dataflow architectures require synchronization across multiple connectivity layers. 25G Ethernet moves data between distributed systems. PCIe 5.0 enables high-speed communication within compute nodes. Multi-protocol PHYs ensure reliable signal transport across both domains.

Together, these technologies allow AI pipelines to operate with predictable latency and scalable bandwidth. They also improve overall system power efficiency by reducing redundant hardware and enabling flexible resource allocation.

Industry Impact

In automotive platforms, these technologies support distributed sensing, centralized AI processing, and deterministic vehicle networking. In Industry 4.0 environments, they enable real-time robotics coordination, machine vision analytics, and predictive maintenance. In 5G infrastructure, they support distributed radio processing and AI-driven network optimization.

Across these industries, adaptive dataflow architectures improve system responsiveness, scalability, and operational reliability.

Synopsys Solutions for Adaptive Dataflow Architectures

Physical AI systems depend on fast and reliable data movement. Adaptive dataflow architectures enable these systems to coordinate sensing, processing, and control in real time.

Beyond raw performance, long-lifecycle applications also demand proven reliability, functional safety, and security under harsh operating conditions. Features such as ASIL readiness and robust verification processes are essential for meeting these requirements across automotive, industrial, and 5G domains. Designers also benefit from solutions that integrate seamlessly across MAC, PCS, and PHY layers, reducing complexity and ensuring interoperability.

Synopsys’ portfolio of IP solutions includes Ethernet, PCIe, and multi-protocol PHY IP with silicon-proven reliability, designed to evolve with connectivity standards. This creates a practical and power-efficient connectivity foundation for scalable next-generation edge intelligence platforms.

25G Ethernet PHY IP Performance Across PCIe 5.0 and 25GBASE-KR Modes

For more details, visit Synopsys IP for Edge AI.

 


The 71st International Electron Devices Meeting (IEDM 2025)

by Daniel Nenni on 02-03-2026 at 6:00 am

IEDM 2025 SemiWiki

It is hard to believe this conference is older than most of the participants, including myself. The amount of history behind this conference is amazing. Back in 1955, the meeting began as the Electron Devices Meeting (EDM), organized by what later became the IEEE Electron Devices Society. Its core purpose was to bring together scientists and engineers who were trying to figure out how solid-state devices actually worked, at a time when the field was still young and rapidly evolving.

The first conference centered on:
  • Transistors, which had only been invented a few years earlier (1947)
  • Diodes and vacuum tubes, which were still widely used
  • Fundamental device physics, such as charge transport, junction behavior, and material properties
  • Manufacturing challenges, including reliability, yield, and reproducibility

Today, the 71st International Electron Devices Meeting (IEDM 2025) reaffirmed its position as the world’s premier forum for advances in semiconductor devices and technologies. Held in December 2025, the conference brought together researchers, engineers, and industry leaders under the theme “Shaping Tomorrow’s Semiconductor Technology,” highlighting both the depth and breadth of innovation driving the future of electronics.

IEDM 2025 featured a high-quality technical program organized into 41 sessions, reflecting record-breaking engagement from the global research community. A total of 923 papers were submitted, the highest number in the conference’s history, with 295 papers accepted, underscoring the event’s selectivity and technical rigor.

The program included three plenary talks, four focus sessions on emerging areas, and a rich mix of oral and poster presentations spanning logic, memory, power devices, sensors, optoelectronics, and emerging compute paradigms.

Attendance at IEDM 2025 demonstrated the conference’s strong international reach and cross-sector relevance. The event recorded 2,123 registered attendees, with the vast majority participating in person. Industry professionals accounted for 52% of attendees, followed by 39% from universities and 7% from government and research institutions, reinforcing IEDM’s role as a bridge between academic research and industrial application.

Participants represented a broad range of countries, reflecting the global nature of semiconductor innovation.

A major highlight of IEDM 2025 was its four focus sessions, each targeting transformative technology areas. Topics included efficient AI solutions across architecture, circuits, devices, and 3D integration; advances in thin-film transistor technologies; beyond-von-Neumann and quantum-inspired computing; and silicon photonics for energy-efficient AI computing.

These sessions emphasized how scaling challenges are increasingly being addressed through system-level co-design, heterogeneous integration, and new device concepts rather than traditional transistor scaling alone.

IEDM 2025 also showcased a set of highlight technical papers that illustrated the cutting edge of device research. Notable breakthroughs included monolithic 3D CFET integration for future logic and SRAM, oxide-semiconductor channel transistors for high-density 3D DRAM, and monolithic 3D compute-in-memory architectures capable of delivering dramatic gains in energy efficiency.

Additional highlights addressed GaN and silicon co-integration for power and RF electronics, sub-micron pixel image sensors, and transistor-to-package thermal simulation techniques aimed at improving reliability in advanced 3D integrated circuits.

Beyond technical sessions, IEDM 2025 placed strong emphasis on professional development and community engagement. The program included six tutorials, two short courses, a career-focused luncheon, and an evening panel discussion examining the evolution of field-effect transistors and the growing role of AI in semiconductor design.

These events provided valuable opportunities for early-career researchers and seasoned professionals alike to gain perspective on both historical progress and future challenges.

An industry vendor exhibition featured leading semiconductor companies, equipment suppliers, and research organizations, further strengthening collaboration between academia and industry. On-demand access to conference content extended the reach of IEDM beyond the live event, enabling continued engagement with the material after the conference concluded.

Bottom line: IEDM 2025 successfully captured the state of the art in electron devices while pointing clearly toward the future. Through record participation, groundbreaking technical contributions, and a strong emphasis on emerging compute and integration paradigms, the conference demonstrated how the semiconductor community is collectively shaping the next era of electronics innovation.

Next we will cover the key presentations, so stay tuned.

Contact IEDM 

Also Read:

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

Verification Futures with Bronco AI Agents for DV Debug

Last Call: Why Your Real‑World Lessons Belong in DAC 2026’s Engineering


Advances in ATPG from Synopsys

by Daniel Payne on 02-02-2026 at 10:00 am

Synopsys TestMAX family

I first learned about ATPG – Automatic Test Pattern Generation – in the 1980s at Silicon Compilers, then continued in the 90s at Viewlogic with the Sunrise tools, so it was illuminating to get an update on Synopsys’ ATPG technology by attending a webinar. Over the years, Synopsys has developed a family of test tools, shown below. Srikanth Venkat Raman, Product Management Director at Synopsys, introduced how their ATPG has added features to become timing-aware and power-aware and to use AI to minimize test cost.

Timing-Aware ATPG

Bruce Xue, Staff Engineer, described how timing-aware ATPG is made possible through fault models that account for transition delays, slack-based delays, path delays, and hold times.

Timing-aware fault models

Using PrimeTime for static timing analysis (STA), there are two ways to deal with timing violations: using functional SDC in ATPG or using violation-based SDC in ATPG.

Four challenges arise when using SDC in an ATPG flow:

  • ATPG quality of results drops when using SDC
  • Multi-cycle Path SDC is treated as False Path
  • Violation SDC can’t match with the functional timing
  • SDC written from PrimeTime has unnecessary commands for ATPG

New features in ATPG now address these challenges:

  • Native SDC reading
  • Multi-cycle Path ATPG
  • Improved timing exception ATPG quality of results
  • Write optimized SDC from PrimeTime

Five test results were presented showing improvements of 15% to 73% in total test cycles at the same test coverage using these SDC and MCP features. You get an improved SDC flow and QoR for better coverage and reliability.

Power-Aware ATPG
Khader Abdel-Hafez, Scientist, was up next on the topic of new power-aware ATPG features. The approach is to limit sequential cell switching to generate power-friendly ATPG patterns, to use functional clock-gating switches, and to limit sequential cell switching during shift cycles. During ATPG test generation the power is estimated, and the power results can be compared against PrimePower, showing excellent correlation.

Tool users can set their power budget by limiting the number of switching scan cells, and if some patterns exceed that budget, those patterns are skipped. For power-aware shift support there’s a software-based approach that uses adjacent fill to manage power during shift, or a hardware-based approach that turns off some chains during ATPG or uses clock divider circuitry to limit switching during shift.
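The pattern-skipping idea can be illustrated with a small sketch. This is conceptual Python, not TestMAX syntax; the function, field names, and toggle counts are all hypothetical, chosen only to show how a toggle budget screens patterns.

```python
def screen_patterns(patterns, toggle_budget):
    """Keep only patterns whose toggled scan-cell count fits the budget."""
    kept, skipped = [], []
    for pat in patterns:
        # Patterns over budget are skipped rather than applied on the tester.
        (kept if pat["toggled_cells"] <= toggle_budget else skipped).append(pat)
    return kept, skipped

# Made-up example patterns with estimated scan-cell switching activity.
patterns = [
    {"id": 0, "toggled_cells": 1200},
    {"id": 1, "toggled_cells": 4800},  # exceeds budget, will be skipped
    {"id": 2, "toggled_cells": 900},
]
kept, skipped = screen_patterns(patterns, toggle_budget=2000)
```

In the real flow, skipped patterns would be regenerated or re-filled to recover coverage, but the budget check itself is this simple.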

The PrimePower tool can also be used with the ATPG tool as shown in this flow diagram:

 

TestMAX ATPG has features to manage your chip power during both capture and shift operations, plus the integration with PrimePower improves power estimation during ATPG and improves QoR.

AI Technology

The final presenter was Theo Toulas, R&D Principal Engineer, and his topic was using AI technology called TSO.ai (Test Space Optimization) with TestMAX ATPG. TSO.ai aims to reduce your test times without changing the tool flow, while using the same CPU resources and producing deterministic results.

Synopsys has AI embedded into TestMAX ATPG, so there’s no separate model maintenance and it learns from multiple designs run across an internal suite, so you just need to activate the feature in TestMAX ATPG. This AI technology analyzes your design from both simulation and design structure, applies learned strategies to optimize your ATPG parameters, then optimizes multiple ATPG heuristics using targeted solver efforts, producing a reduction in test cycles.

Feedback from customer designs shows an average test cycle optimization of 15.81%, while increasing CPU runtime by 2.04X. Here’s a plot showing a dozen designs with test cycle optimization benefits:

Users just add a single command to the flow, turning on the ATPG learning feature, so there’s no learning curve involved.

Summary

Test engineers strive to meet fault coverage goals while staying within power budgets, uncovering any timing issues, and minimizing test cycles. That’s a tall order, and manual methods are not sufficient for the task at hand. New ATPG features added by Synopsys are addressing these critical issues for timing-aware and power-aware flows, while AI is helping optimize test cycles. Working smarter, not harder is what I saw in this webinar.

Watch the archived webinar after a brief registration online.

Related Blogs


TSMC’s 2026 AZ Exclusive Experience Day: Bridging Careers and Semiconductor Innovation

by Daniel Nenni on 02-02-2026 at 8:00 am

TSMC AZ Day FAB 21

In February of 2026, Taiwan Semiconductor Manufacturing Company (TSMC) will host the TSMC AZ Exclusive Experience Day in Phoenix, Arizona, offering selected participants a rare opportunity to engage directly with one of the most advanced semiconductor manufacturing organizations in the world. The event will serve as an immersive introduction to TSMC’s Arizona operations, highlighting the company’s culture, technological leadership, and long-term commitment to building a robust U.S. semiconductor ecosystem.

Designed as more than a traditional recruitment event, the Exclusive Experience Day will provide attendees with an in-depth look at what it means to work at the forefront of advanced chip manufacturing. Participants will gain insight into TSMC’s operational philosophy, engineering rigor, and collaborative environment through curated presentations, interactive sessions, and direct engagement with company leaders and engineers. By opening its doors to a select audience, TSMC will aim to foster meaningful connections with future talent while communicating its expectations for excellence, discipline, and innovation.

The event will take place against the backdrop of TSMC’s rapidly expanding Arizona presence. As the company continues its multi-billion-dollar investment in advanced fabrication facilities in the state, Arizona is expected to become a cornerstone of U.S.-based semiconductor manufacturing. These fabs will play a critical role in supporting industries such as artificial intelligence, high-performance computing, automotive electronics, and advanced mobile devices. The Exclusive Experience Day will therefore not only introduce career opportunities but also contextualize how individual roles contribute to broader national and global technology goals.

Throughout the day, attendees will have opportunities to interact with TSMC engineers, technicians, managers, and human resources professionals. These interactions will allow participants to ask detailed questions about career paths, training programs, work expectations, and life inside a high-tech semiconductor fab. By facilitating candid conversations, TSMC will seek to demystify the realities of semiconductor manufacturing and help prospective employees assess alignment between their skills, aspirations, and the company’s mission.

A key feature of the experience will be guided exposure to fab operations and environments. Participants will learn about cleanroom protocols, advanced process technologies, and the precision required to manufacture chips at nanometer scales. For many attendees, this will be their first opportunity to understand the discipline and teamwork required to operate within one of the world’s most sophisticated manufacturing settings. This hands-on exposure will reinforce the idea that semiconductor manufacturing is both technically demanding and deeply collaborative.

Beyond technical learning, the 2026 TSMC AZ Exclusive Experience Day will emphasize culture and community. TSMC will present its core values, including long-term thinking, continuous improvement, and mutual trust, while also highlighting its investment in employee development and local engagement. As the company continues to integrate into the Arizona community, workforce development and talent cultivation will remain central to its strategy. Events like this will help build a shared sense of purpose between TSMC and the people who will support its operations for decades to come.

The timing of the event will align with ongoing hiring and workforce expansion efforts, making it especially relevant for students, early-career professionals, and experienced engineers seeking to participate in a once-in-a-generation manufacturing build-out. For attendees, the experience will offer clarity on expectations, opportunity, and impact, providing a realistic and inspiring view of what a career at TSMC Arizona could entail.

Bottom line:  the 2026 TSMC AZ Exclusive Experience Day will represent more than an introduction to jobs or facilities. It will serve as an invitation to join a transformative effort in advanced manufacturing, where individual talent and global technology leadership intersect. By bringing future employees inside its vision, TSMC will reaffirm that the success of the semiconductor industry depends not only on capital and equipment, but on the people who design, build, and sustain it.

CONTACT TSMC

Also Read:

The Chronicle of TSMC CoWoS

TSMC’s CoWoS® Sustainability Drive: Turning Waste into Wealth

TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!


DAC – The Chips to Systems Conference 2026

by Daniel Nenni on 02-02-2026 at 6:00 am

DAC 2026 Long Beach

The Design Automation Chips to Systems Conference is the preeminent international event for professionals involved in electronic design, system architecture, and EDA. Formerly known simply as the Design Automation Conference (DAC), it has evolved over more than six decades into a forward-looking forum that spans the entire spectrum from silicon chips to complex systems, hence its current tagline “Chips to Systems.”

I have attended DAC since the 1984 event in Albuquerque, New Mexico, right out of college. It is my favorite conference and the absolute best networking event for the EDA and IP industry. It offers a deep technical and academic program with numerous networking events. I have done book signings, panels, and presentations at DAC for many years, and hopefully for many more. My beautiful wife and I will be there enjoying the sun and fun in Long Beach. First on our list is a tour of the Queen Mary!

This year DAC will be held at the Long Beach Convention Center July 26 through July 29, 2026. This coastal venue marks a vibrant new chapter for the conference, bringing its signature blend of deep technical content, industry showcases, and professional networking to Southern California.

Scope and Mission

DAC serves a broad and diverse audience that includes system architects, chip designers, software engineers, validation specialists, and researchers from industry, academia, and government labs. Participants come from thousands of organizations worldwide to explore breakthroughs in design methods, automation tools, and emerging technologies.

The core mission of the conference is to advance innovation in how electronic systems are conceived, implemented, verified, and integrated. Covering everything from transistor-level circuit design to large-scale systems deployment and optimization, DAC stimulates cross-disciplinary dialogue and collaboration among specialists in hardware, software, and automation.

Technical Program and Tracks

The heart of DAC’s value lies in its technical program, which for 2026 will include a wide range of sessions, panels, and presentations addressing both fundamental research and practical engineering challenges. Key tracks include:

  • Research Track, highlighting original scientific and engineering breakthroughs.
  • Engineering and Practice Tracks, which focus on real-world design problems, tools, and workflows.
  • Workshops and Tutorials, offering more focused, hands-on learning and discussion opportunities.
  • Special Sessions and Panels, bringing varied perspectives on hot topics such as AI integration in hardware design and security.

Technical sessions selected by expert committees cover emerging topics ranging from AI-driven design automation to chiplets and exotic system architectures. There is also growing interest in fields like quantum computing hardware and cloud-native design environments.

Papers accepted for presentation are typically published in the conference proceedings and indexed in major digital libraries such as IEEE Xplore and the ACM Digital Library, providing a lasting academic impact.

Exhibition and Industry Engagement

Alongside the technical tracks, DAC features a large exhibition floor where leading companies in EDA, semiconductor IP, hardware tooling, and services demonstrate the latest technologies. Traditionally, around 150 exhibitors showcase solutions that span design automation, verification, architecture tools, and integrated hardware components.

The exhibition fosters direct interaction between vendors, users, and innovators, promoting knowledge transfer and potential partnerships. In addition to traditional booths, the event includes exhibitor forums and pavilion sessions where companies present deeper technical content right on the show floor.

Networking and Career Impact

DAC is also a vital networking venue. Attendees can connect through social events, informal meetups, and mentoring sessions, making it a critical space for early-career engineers and researchers to build relationships with established industry leaders. The diversity of participants—from startups to global corporations, and from graduate students to senior executives—creates a rich ecosystem for idea exchange.

In recent years, DAC has increasingly emphasized real-world impact and collaboration, encouraging submissions and participation that bridge academic research with industrial practice. This balance helps ensure that new methodologies and tools can move from concept to implementation faster.

Trends and Themes for 2026

Though specific session topics are continually finalized, some clear themes have emerged in the lead-up to 2026:

  • AI in EDA and System Design – exploring how machine learning, including agentic and generative models, is reshaping design flows.
  • Security and Trustworthy Systems, particularly as chips are embedded in critical infrastructure.
  • Chiplets and Advanced Integration, reflecting modular hardware approaches.
  • Cross-Domain Integration, such as hardware-software co-design and cloud-driven design methodologies.

Bottom line: DAC 2026 continues a long tradition of being at the forefront of electronic design innovation. It combines rigorous technical content, broad industry participation, and a global community of practitioners and researchers—all under the theme of “Chips to Systems.” Whether you are an engineer, researcher, or business leader, DAC offers a unique opportunity to learn, connect, and help shape the future of electronic systems.

Register for DAC.

Also Read:

Pushing the Packed SIMD Extension Over the Line: An Update on the Progress of Key RISC-V Extension

Verification Futures with Bronco AI Agents for DV Debug

Last Call: Why Your Real‑World Lessons Belong in DAC 2026’s Engineering


CEO Interview with Naama BAK of Understand Tech

by Daniel Nenni on 02-01-2026 at 6:00 pm

Naama Bak Headshot

Naama BAK is an entrepreneur with 15 years of experience in tech. He is the founder of Understand Tech, a generative AI platform for enterprises, and Trustii.io, a machine learning platform for data science challenges. He previously held roles at NXP Semiconductors, Orange, and Safran, working in cybersecurity across research, development, product marketing, and business development.

Tell us about your company

Understand Tech is a French and American AI company focused on building scalable, secure, and private AI solutions for the semiconductor industry. Having spent more than a decade in the semiconductor industry, I am keenly aware of the challenges these companies face. As product complexity continues to increase, employees and customers struggle to understand product functionality. Understand Tech was founded to address this complexity. We have created AI tools that enable virtual SMEs which can answer support and engineering questions, generate test plans, and solve a host of other problems caused by the sheer complexity of semiconductor products. Simply stated, we turn technical documentation into usable knowledge, and knowledge into decisions.

What motivated you to start this company?

Not long ago, while reviewing a technical specification, I spent over an hour trying to answer a simple, but critical technical question. How is a Matter commissioner authenticated during commissioning? The information existed in the Matter smart home standards documentation, but it was fragmented, buried in dense documentation, and costly to extract.

That moment reflected a broader reality I’ve seen throughout my career in semiconductors and deep-tech; technical documents are foundational to how products are built, validated, and secured, yet they remain difficult to operationalize. Engineers and decision-makers spend too much time searching, interpreting, and cross-checking information that should be immediately actionable.

As generative AI matured, the opportunity became clear. We didn’t set out to build another chatbot. Instead, we created a system that understands complex technical content and delivers reliable, domain-specific answers, grounded in the source material enterprises already trust.

Together with a team of experienced engineers, we built Understand Tech to bridge the gap between advanced AI capabilities and the real, day-to-day needs of technical organizations: turning documentation into usable knowledge, and knowledge into decisions.

What problems are you solving?

Our AI tools simplify complexity without compromising security and privacy. They can be used to create virtual subject matter experts (SMEs) for pre-sales support, customer support, for assisting distributors in understanding which product to use for a given use case, or to generate test plans.

What keeps your customers up at night?

They are worried about maintaining data privacy and security as they adopt AI solutions. In many cases, AI solutions will be ingesting some of their most sensitive assets, so data protection and security are paramount.

Describe your experience in starting and growing Understand Tech

It has been an exciting and challenging two years. Customer reception of our solution has been great, which is always very rewarding. Going from NXP, a company of more than 30,000 employees, to what started as a two-person company is a massive shift. There was literally no one besides myself and my cofounder to handle everything from product development and marketing to IT and finance. We now have an amazing group of people on our team, which makes it fun to go to work every day.

What application areas are your strongest?

One of the earliest use cases for our solution was implementing virtual SMEs for pre-sales and customer support. For example, we support Synaptics and SEALSQ, where our product serves as the AI engine behind the chatbots on their websites. As an early use case, this is one of the most mature features of our product. We also have a very strong solution for test case generation for semiconductor products.

What does the competitive landscape look like and how do you differentiate?

While other companies are beginning to provide AI solutions, our main competition remains OpenAI and ChatGPT. Those solutions lack the ability to output long, structured documents such as test plans. Most importantly, they fail to provide the enterprise-grade security features our customers require.

What new features/technology are you working on?

We are just about to launch our new “AI in a Box” solution on February 12 at WAICF in Cannes, France. This solution brings all the power of our cloud platform to on-premise deployments. This allows companies to implement advanced AI entirely offline and fully inside their security perimeter. Our system is pre-configured and production-ready: simply plug it in, connect to your network, and start building custom AI solutions with no installation or configuration required. Enjoy the flexibility and performance of our cloud platform while maintaining complete control over security and deployment.

Your solution seems broadly applicable. Why did you choose to target semiconductor companies?

Our solution is quite flexible and can be used by anyone who requires privacy, security, and customization when implementing complex AI models. We are working with a few companies outside of the semiconductor industry, but we focus on semiconductor companies for several reasons. First, that is where I spent most of my career. I know many of the companies in the semiconductor space and, more importantly, I understand the challenges these companies face. Our solution was designed to help people working with complex products, and semiconductor companies have some of the most complex products in the world.

How do customers normally engage with your company?

You can learn more about us from our website at understand.tech or from our LinkedIn page. We are happy to schedule a demo or answer any questions. Our solution can be deployed through our private cloud, through our customer’s own private cloud, or through an on-premise appliance.

CONTACT Understand.Tech

Also Read:

CEO Interview with Dr. Heinz Kaiser of Schott

CEO Interview with Moshe Tanach of NeuReality

2026 Outlook with Paul Neil of Mach42


CEO Interview with Echo Yang of CSCERAMIC

by Daniel Nenni on 01-31-2026 at 4:00 pm

Echo Yang


Echo Yang is the CEO of CSCERAMIC, a China-based manufacturer specializing in advanced ceramic materials and precision ceramic components for industrial and laboratory applications. With a background spanning international trade, manufacturing coordination, and engineering-driven supply chain development, Echo leads CSCERAMIC’s strategy in high-purity alumina ceramics, laboratory consumables, and custom-engineered ceramic solutions.

Under his leadership, CSCERAMIC has evolved from a traditional ceramic supplier into a technically focused manufacturer emphasizing material stability, dimensional control, and long-term performance in demanding operating environments. The company serves customers across materials testing, thermal processing, chemical systems, and high-temperature industrial equipment markets.

Tell us about CSCERAMIC.

CSCERAMIC is an advanced ceramics manufacturer focused on alumina-based ceramic materials and laboratory ceramic consumables. Our core products include high-purity alumina tubes, rods, crucibles, and custom ceramic components designed for thermal analysis, material testing, chemical processing, and high-temperature industrial systems.

Our approach is not centered on selling standard catalog items, but on understanding how ceramics behave under real operating conditions. Many of our customers face challenges related to thermal cycling, chemical exposure, and dimensional stability over long service periods. We position ourselves as a manufacturing partner that helps address those issues through material selection, process control, and precision machining.

What problems are you solving for your customers?

Many industrial and laboratory systems operate under conditions that push materials to their limits—high temperatures, aggressive chemical environments, and continuous operation cycles. Traditional materials often degrade gradually, leading to misalignment, contamination, or inconsistent performance.

We help customers reduce these risks by providing ceramic components that maintain structural integrity, thermal stability, and chemical resistance over time. In applications such as thermal analysis instruments, furnace systems, and chemical equipment, even small material changes can affect measurement accuracy or system reliability. Our goal is to eliminate material-related uncertainty so customers can focus on system performance rather than component replacement.

Where do you see your strongest application areas today?

Our strongest applications are in laboratory analysis and high-temperature industrial systems. This includes:

  • Thermal analysis consumables for DSC, TGA, and related instruments
  • High-purity alumina tubes and rods for furnaces and thermal processing equipment
  • Ceramic components for chemical and corrosive environments
  • Custom ceramic parts requiring tight dimensional tolerances

These applications share a common requirement: materials must behave predictably under heat and chemical exposure. That is where advanced ceramics, particularly alumina-based materials, offer clear advantages.

What keeps your customers up at night?

Reliability and consistency. Customers are concerned about gradual degradation rather than catastrophic failure. Issues such as micro-cracking, thermal distortion, or surface contamination can slowly compromise system accuracy or uptime.

Another concern is supply consistency. For many users, replacing ceramic components is not simply a matter of sourcing a part—it can require recalibration, validation, or downtime. Customers want confidence that the parts they receive today will behave the same way six months or two years from now.

How do you differentiate CSCERAMIC from other ceramic suppliers?

The main difference is our emphasis on engineering collaboration and process consistency. We spend significant time understanding how a ceramic component is used, not just how it is manufactured.

Rather than pushing standardized products, we focus on:

  • Stable raw material sourcing and purity control
  • Controlled sintering and machining processes
  • Repeatable dimensional and surface quality
  • Application-driven customization

This allows us to support customers who need ceramics to perform reliably over long operating cycles rather than simply meeting initial specifications.

What technology or capability improvements are you currently working on?

We are continuously improving our machining precision, surface finishing, and inspection methods for alumina ceramics. Small improvements in surface quality or dimensional control can significantly reduce wear, particle generation, or thermal stress in real applications.

We are also investing in better internal testing and validation workflows to better simulate customer operating conditions. This helps us identify potential failure modes earlier and improve component design before production scaling.

How do customers typically engage with CSCERAMIC?

Most engagements start with a technical discussion. Customers usually bring drawings, operating parameters, or performance challenges rather than just a part number. From there, we work together to refine material choice, tolerances, and design details.

We support both prototyping and long-term production, and we place a strong emphasis on communication throughout the process. Our website, https://www.csceramic.com, serves as an entry point for customers to understand our capabilities and initiate technical discussions.

Final thoughts?

As industrial systems become more precise and demanding, materials can no longer be treated as interchangeable commodities. Advanced ceramics play a quiet but essential role in system reliability, accuracy, and lifecycle performance.

Our focus at CSCERAMIC is to ensure that ceramic components support—not limit—the performance of the systems they are part of. That mindset will continue to guide how we develop our materials, processes, and customer partnerships.

CONTACT CSCERAMIC

Also Read:

CEO Interview with Dr. Raj Gautam Dutta of Silicon Assurance

CEO Interview with Naama BAK of Understand Tech

CEO Interview with Dr. Heinz Kaiser of Schott


Podcast EP329: How Marvell is Addressing the Power Problem for Advanced Data Centers with Mark Kuemerle

by Daniel Nenni on 01-30-2026 at 10:00 am

Daniel is joined by Mark Kuemerle, Vice President of Technology, Custom Cloud Solutions at Marvell. Mark is responsible for defining leading-edge ASIC offerings and architecting system-level solutions. Before joining Marvell, Mark was a Fellow in Integrated Systems Architecture at GLOBALFOUNDRIES and has held multiple engineering positions at IBM. He has authored numerous articles on die-to-die connectivity and multichip systems and holds several patents related to low-power technologies and package integration.

Mark begins this far-reaching and informative discussion with the observation that the parameter driving the overall budget for advanced data centers is no longer money, but rather the available power to drive the massive connectivity of AI accelerators. Dan explores the significant work Marvell is doing to address this power constraint with its advanced die-to-die technology. Mark describes the impact and benefits of technologies such as bi-directional interfaces, redundancy, and methods to power down parts of the interface when they are not needed. He explains how these techniques lower power budgets, improve yield, and reduce total cost of ownership.

Mark discusses how Marvell balances its custom interface technology with popular standards such as UCIe. He also comments on the outlook for future die-to-die interfaces with the addition of integrated optics.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Taming Advanced Node Clock Network Challenges: Jitter

by Mike Gianfagna on 01-30-2026 at 6:00 am

Taming Advanced Node Clock Network Challenges: Jitter

Clock jitter rarely fails in obvious ways. In advanced-node designs, its impact is often indirect, emerging through subtle timing uncertainty, interaction with power delivery noise, and compounding effects across large clock networks. These behaviors can quietly erode margin and predictability, even when conventional sign-off checks appear to pass. As a result, jitter that was previously absorbed through conservative margining now directly determines whether designs meet silicon targets or require costly late-stage rework, including frequency derates, ECO churn, or delayed product ramps.

ClockEdge focuses on this class of problem with a unique approach that delivers deep insight, helping teams balance the often subtle and conflicting requirements to build reliable clock networks across all operating conditions. ClockEdge is publishing a series of white papers that examine real clock failure mechanisms and practical ways to address them. This installment focuses on the unique behaviors and risks associated with clock jitter.

Clock Jitter – What Changed?

Jitter was once dominated by isolated sources such as PLL phase noise or local buffer variation. In advanced design nodes today, it manifests as time-varying, distributed electrical behavior that evolves as the clock propagates through deep, heterogeneous clock networks. Power delivery noise, interconnect parasitics, non-linear device behavior, and topology-dependent amplification increasingly shape clock edge uncertainty over time across the system. The varied and locally influenced sources of jitter are illustrated in the figure below.

Clock jitter is a distributed electrical phenomenon

The figure illustrates the distinct jitter profiles that emerge at different locations in the clock network due to local loading, power integrity conditions, and downstream topology. It is important to note that worst-case jitter often appears far from the original source.

As clock signals traverse regions with different power domains, they can be amplified, attenuated, or reshaped in non-intuitive ways. In multi-voltage designs, clock gating and localized activity bursts further exacerbate this behavior, creating correlated, time-dependent jitter patterns that cannot be captured through simple budgeting or worst-case assumptions.
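To see why correlated noise cannot be budgeted like independent per-stage variation, consider a toy Monte Carlo model of a buffer chain. This is purely an illustrative sketch under simplified assumptions; the function name and parameters are hypothetical and do not come from the white paper or any ClockEdge tool.

```python
import random

def chain_jitter_rms(n_stages, n_trials, sigma, shared_fraction, seed=0):
    """Toy model: RMS edge displacement at the end of a buffer chain.

    Each stage adds delay noise with standard deviation `sigma`. A
    `shared_fraction` of that noise is common to every stage within a
    trial (e.g., a supply droop hitting the whole chain); the rest is
    drawn independently per stage.
    """
    rng = random.Random(seed)
    sq_sum = 0.0
    for _ in range(n_trials):
        shared = rng.gauss(0.0, sigma)  # correlated component: one draw per trial
        total = 0.0
        for _ in range(n_stages):
            local = rng.gauss(0.0, sigma)  # independent per-stage component
            total += shared_fraction * shared + (1.0 - shared_fraction) * local
        sq_sum += total * total
    return (sq_sum / n_trials) ** 0.5
```

With fully independent noise the accumulated jitter grows roughly with the square root of the chain depth, while fully shared noise grows linearly with depth, so a 16-stage chain ends up about four times worse. A single global uncertainty value cannot represent both regimes at once, which is the failure mode the text describes for correlated, time-dependent jitter.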

Why Traditional Methods Fall Short

Conventional jitter metrics such as absolute jitter, period jitter, and cycle-to-cycle jitter provide useful measurements at individual observation points, but they do not capture how jitter evolves dynamically and spatially across a clock network. These metrics implicitly assume uniform propagation and limited interaction with the surrounding electrical environment.
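These point metrics are straightforward to compute from a list of edge timestamps, which also makes their limitation concrete. Below is a minimal Python sketch; the function name and return keys are illustrative, not part of any standard tool.

```python
import statistics

def jitter_metrics(edge_times, nominal_period):
    """Classic single-point jitter metrics from rising-edge timestamps.

    edge_times: measured edge arrival times (seconds), ascending.
    nominal_period: ideal clock period (seconds).
    """
    # Absolute jitter (time interval error): deviation of each edge
    # from an ideal clock grid anchored at the first edge.
    t0 = edge_times[0]
    tie = [t - (t0 + i * nominal_period) for i, t in enumerate(edge_times)]
    # Period jitter: deviation of each measured period from nominal.
    periods = [b - a for a, b in zip(edge_times, edge_times[1:])]
    period_err = [p - nominal_period for p in periods]
    # Cycle-to-cycle jitter: change between consecutive periods.
    c2c = [b - a for a, b in zip(periods, periods[1:])]
    return {
        "tie_peak_to_peak": max(tie) - min(tie),
        "period_rms": statistics.pstdev(period_err),
        "c2c_peak": max(abs(x) for x in c2c),
    }
```

Each metric characterizes a single observation point. None of them says anything about how the jitter at that node correlates with jitter elsewhere in the network, which is exactly the gap the surrounding text identifies.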

Approaches such as static timing analysis (STA) and margin-based methodologies struggle to close jitter in advanced-node clock networks. It is important to understand that STA relies on delay abstractions and fixed uncertainty values that infer clock behavior rather than computing it directly, implicitly assuming that jitter is spatially uniform and temporally stable.

At advanced nodes, where jitter is dominated by localized power distribution effects, non-linear device behavior, and topology-dependent amplification, this assumption no longer holds. Accurate jitter closure requires moving beyond global budgets and worst-case assumptions to waveform-level, electrically accurate analysis across multiple levels of the clock network. The white paper examines these challenges in detail across per-gate, per-path and per-noise profile analysis.

How ClockEdge Addresses the Latest Jitter Challenges

The key capability offered by ClockEdge is to move beyond budgeting and inference to direct time-domain electrical computation. The size of modern clock networks and the need to observe behavior over many cycles put conventional SPICE analysis out of reach. With its Veridian™ suite, ClockEdge delivers SPICE-accurate, full-clock electrical verification at scale. Within the suite, vJitter directly computes clock waveforms over many cycles, capturing how jitter propagates, correlates, and interacts with real electrical effects across the entire clock network.

This large-scale, highly accurate analysis exposes subtle but highly impactful time-varying timing behavior that clock jitter can create. The white paper explains how these behaviors can be identified and addressed early, before they surface as late-stage design headaches. As described in the white paper:

By computing actual clock behavior rather than assuming it, ClockEdge allows design teams to achieve confident sign-off, preserve performance targets, and reduce the risk of late-stage surprises in advanced-node systems.

To Learn More

The clock network is the largest and most consequential network in most advanced designs. Improving visibility into clock behavior can directly impact performance, reliability, and overall product predictability. If you are involved in advanced chip design, this white paper explains how waveform-accurate jitter analysis enables more confident clock sign-off and reduces late-stage risk. You can access your copy of the white paper here. And that’s how to tame advanced node clock network jitter challenges.

Also Read:

Taming Advanced Node Clock Network Challenges: Duty Cycle

How vHelm Delivers an Optimized Clock Network

ClockEdge Delivers Precision, Visibility and Control for Advanced Node Clock Networks


Synopsys and AMD Honored for Generative and Agentic AI Vision, Leadership, and Impact

by Daniel Nenni on 01-29-2026 at 12:00 pm

Synopsys AMD Agentic AI Honor

Synopsys and AMD were recently selected by the World Economic Forum for inclusion in the WEF’s MINDS (Meaningful, Intelligent, Novel, Deployable Solutions) AI program, recognizing their leadership and real-world impact in applying generative and agentic AI to semiconductor design and engineering. This distinction places them among a distinguished global cohort of organizations pioneering AI innovation with measurable outcomes in complex technical domains.

The MINDS program is part of the Forum’s broader AI Global Alliance initiative, which seeks to identify and amplify AI solutions that are not just technologically advanced but have tangible, deployable impact. Rather than focusing on pilot projects or theoretical research, MINDS highlights implementations that are already making a difference in how industries operate, and Synopsys and AMD’s work in semiconductor design stood out as a clear example of this shift.

Why This Recognition Matters

Traditionally, semiconductor design has relied on manual, labor-intensive workflows anchored in expert engineers creating and verifying designs line by line. But as chips become more complex, with billions of transistors and multi-disciplinary integration requirements, these workflows have faced scaling limits. Generative and agentic AI, AI that can autonomously perform multi-step tasks and adapt workflows, offers a powerful new paradigm for accelerating these processes while preserving quality and reducing costs.

By honoring Synopsys and AMD, the WEF is acknowledging that AI is not just a future promise for chip design; it is already producing real, measurable business and engineering outcomes. Their work demonstrates how AI can amplify human expertise instead of replacing it, enabling engineers to explore design spaces faster, automate repetitive tasks, and focus on higher-value decisions.

Synopsys: AI-Driven EDA and Agentic Workflows

Synopsys, a global leader in EDA software and services used by the world’s semiconductor companies to design, verify, and optimize chips, has been integrating generative AI and reinforcement learning deeply into its toolset. Through its Synopsys.ai suite, the company has introduced AI-assisted capabilities that help engineers at various phases of the design flow, from RTL development and verification to signoff and optimization.

These AI capabilities include AI “copilots” that assist with code and script generation, knowledge assistants that expedite learning and problem resolution, and agentic systems that can manage multi-step workflows. In collaboration with partners like Microsoft, Synopsys is also advancing toward more autonomous EDA workflows under the concept of AgentEngineer™, a vision for AI agents capable of executing complex, multi-agent tasks that previously required extensive human intervention.

This focus on agentic AI marks a departure from simple generative tools to systems that can coordinate iterative tasks, make decisions across multiple steps, and adapt to evolving design contexts, a capability that is especially valuable in semiconductor development where design constraints, tradeoffs, and verification requirements are highly intricate.

AMD: Systems-Level AI Integration

For its part, AMD has been applying these advanced AI workflows in real semiconductor product development. By partnering with Synopsys, AMD has incorporated reinforcement learning and generative AI tools directly into its chip design and verification processes, delivering substantial benefits in productivity and performance. According to the WEF case study on the MINDS award, this collaboration has enabled AMD to double productivity across design stages, expand design exploration, reduce overall design costs significantly, and shrink time to signoff, all outcomes that directly impact competitiveness in a fast-moving market.

These gains are especially notable given the rising pressures facing the semiconductor industry. Global demand for advanced chips continues to grow rapidly while the pool of experienced engineers has not kept pace. AI-augmented design workflows provide a way to leverage expert knowledge at scale, enabling more efficient use of human talent and AI assistants working together.

Looking Ahead: AI as a Strategic Enabler

The recognition from the World Economic Forum underscores a broader shift in how AI is perceived in high-technology sectors, from a promising research topic to a strategic enabler of real-world innovation and competitive advantage. By spotlighting Synopsys and AMD, the WEF is highlighting that complex fields like chip design can benefit from AI not just conceptually, but with quantifiable improvements in engineering workflow efficiency, product quality, and time to market.

Bottom line: As AI technologies continue to mature, other organizations in semiconductor design, manufacturing, and systems engineering will likely follow similar paths, combining human ingenuity with scalable AI workflows to tackle the ever-increasing complexity of next-generation computing systems.

Also Read:

Synopsys’ Secure Storage Solution for OTP IP

Curbing Soaring Power Demand Through Foundation IP

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!