In the complex world of IC design, electrical verification has emerged as a critical yet often overlooked bottleneck. Aniah’s upcoming webinar on December 4, 2025, titled “Electrical Verification: The Invisible Bottleneck in IC Design,” sheds light on this issue, introducing their groundbreaking OneCheck® solution. As IC designs grow exponentially per Moore’s Law, schematic size and complexity surge, making electrical errors inevitable. Traditional Electrical Rule Checking (ERC) struggles to keep pace, leading to a widening verification gap. The Wilson Research Group’s 2024 study highlights that 86% of IC/ASIC designs require respins due to design flaws, costing millions in mask sets and delayed time-to-market. Late-stage bugs not only waste resources but also risk missing market windows, eroding competitive advantage.
Electrical Rule Checking (ERC) fundamentals involve ensuring designs comply with electrical constraints like connectivity, voltage compatibility, current limits, and biasing. It detects issues such as floating nodes, short-circuits, power-ground errors, and device overstress, validating electrical integrity pre-tape-out to boost reliability and yield. However, traditional methods fall short amid rising transistor counts, design complexity and stagnant verification resources, particularly in headcount for ERC engineers.
Aniah’s OneCheck® represents a paradigm shift: the first static transistor-level ERC tool for “right-on-first-silicon” ICs. Designed for easy adoption, it provides automated PDK and power setup, fast run times, and advanced cross-probing with Cadence Virtuoso. Independent of foundry tech-files, it empowers designers and CAD teams with unprecedented error coverage, drastically reducing false positives, by up to 150x in some cases, through propagation path analysis and system-conditional checks. For mature and advanced nodes, it handles billion-transistor designs in minutes.
Breakthrough features include full-chip analysis across all possible power combinations for 100% reliable error detection. OneCheck® uniquely identifies errors such as conditional HiZ, missing level shifters, bulk/diode leakages, floating gates/bulks/diodes, electrical overstress, and ESD gate-to-supply issues. Its pseudo-DC engine enumerates all DC states for formal proofing, verifying complex isolation on the fly and rejecting impossible conditions to minimize noise.
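To make the idea of exhaustively enumerating supply states concrete, here is a toy Python sketch. It is purely illustrative and is not Aniah’s algorithm; the supply names and the driver rule are assumptions, and a real static ERC tool works on the full transistor netlist rather than a hand-written rule.

```python
# Toy illustration only (not OneCheck's method): enumerate supply on/off
# combinations and flag a net that can be left floating (conditional HiZ).
from itertools import product

supplies = ["VDD_CORE", "VDD_IO", "VDD_AON"]

def driver_active(state):
    # Hypothetical rule: the net is only driven when both VDD_CORE and VDD_IO are up.
    return state["VDD_CORE"] and state["VDD_IO"]

conditional_hiz = []
for bits in product([False, True], repeat=len(supplies)):
    state = dict(zip(supplies, bits))
    # Flag states where always-on logic observes the net but nothing drives it.
    if state["VDD_AON"] and not driver_active(state):
        conditional_hiz.append(state)

for s in conditional_hiz:
    print("Conditional HiZ under:", {k: ("on" if v else "off") for k, v in s.items()})
```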
Integrated natively with Cadence Virtuoso, OneCheck® fits early in the flow, from post-schematic netlist to full-chip, bridging analog, digital, and mixed-signal domains. Users praise its simplicity: “Aniah lets designers achieve excellent design quality with minimum effort.” It “starts at check-and-save, converging to 100% quality by sign-off,” as noted by design managers and verification engineers.
Bottom line: By grouping thousands of errors into a limited number of root causes, OneCheck® accelerates debugging, enhances productivity, and cuts costs. Aniah emphasizes 100% error coverage and first-in-class support. In an industry where delays mean millions, adopting OneCheck® ensures zero electrical errors on the first silicon spin, fostering confidence from design to tape-out.
The explosive growth of AI and accelerated computing is placing unprecedented demands on system-on-chip (SoC) design. Modern AI workloads require extremely high bandwidth, ultra-low latency, and energy-efficient data movement across increasingly heterogeneous architectures. As SoCs scale to incorporate clusters of CPUs, GPUs, NPUs, memory subsystems, and domain-specific accelerators, the Network-on-Chip (NoC) becomes the backbone that determines system performance, power efficiency, and overall scalability. This presentation, featuring Arteris and Aion Silicon, explores the key considerations for architecting next-generation NoCs optimized for AI-driven designs.
We begin with a deep look at AI communication patterns. Unlike traditional compute pipelines, AI workloads exhibit bursty, data-parallel behavior, with strong dependencies between compute engines and shared memory resources. Understanding tensor flow, on-chip bandwidth hotspots, and memory reuse patterns is essential for mapping compute, memory, and accelerator elements onto the SoC floorplan. This analysis directly influences NoC topology, routing strategies, QoS configuration, and buffer sizing.
NoC construction strategies form the core of the discussion. Tree, mesh, and hybrid topologies are evaluated not in isolation but through the lens of physical awareness—how real-world floorplans, die size, and chip aspect ratios dictate the most practical choices. AI-oriented SoCs often employ tiling strategies that replicate compute-memory clusters across the die. The NoC must scale with this modularity while maintaining predictable performance across thousands of concurrent data flows. The tradeoffs between bandwidth, power efficiency, and area become especially acute at advanced nodes, making architectural decisions increasingly consequential.
Another major theme is system complexity. Multi-die integration, enabled by 2.5D and 3D packaging, introduces new layers of NoC design challenges. Cross-die communication must handle longer physical paths, varying thermal conditions, and partitioned power domains. Distributed AI processing elevates the need for robust memory-coherency models—balancing performance with the overhead of maintaining shared state across heterogeneous compute engines. Designers must also account for dynamic voltage and frequency scaling (DVFS), power gating, and isolation across multiple domains, all of which impact NoC behavior.
To manage this complexity, performance simulation and KPI-driven evaluation are essential. The session discusses practical methodologies for modeling throughput, latency, congestion, and power consumption using a combination of transaction-level simulation, trace-based modeling, and workload-driven analysis. These tools allow architects to quantify tradeoffs early, before committing to RTL or physical implementation, ensuring the NoC meets system performance targets.
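As an illustration of the kind of early, workload-driven analysis described here, the Python sketch below estimates per-link utilization and a rough zero-load latency figure from a tiny synthetic trace. It is a conceptual toy, not an Arteris tool; the link bandwidth, hop latency, observation window, and trace contents are all assumed values.

```python
# Minimal trace-driven "what-if" sketch for NoC sizing (illustrative only).
from collections import defaultdict

LINK_BW_GBPS = 32.0          # assumed link bandwidth
HOP_LATENCY_NS = 2.0         # assumed per-hop router latency
WINDOW_NS = 10_000.0         # assumed observation window for the trace

# Trace entries: (source, destination, bytes moved, hops traversed)
trace = [
    ("npu0", "hbm0", 4096, 2),
    ("npu1", "hbm0", 8192, 3),
    ("cpu0", "hbm1", 1024, 4),
]

bytes_per_route = defaultdict(int)
for src, dst, nbytes, hops in trace:
    bytes_per_route[(src, dst)] += nbytes

for (src, dst), nbytes in bytes_per_route.items():
    gbps = nbytes * 8 / WINDOW_NS            # bits per ns is numerically Gbps
    print(f"{src}->{dst}: {gbps:.2f} Gbps ({gbps / LINK_BW_GBPS:.0%} of one link)")

avg_zero_load_latency = sum(h * HOP_LATENCY_NS for *_ignored, h in trace) / len(trace)
print(f"Average zero-load latency: {avg_zero_load_latency:.1f} ns")
```

Even a crude model like this makes bandwidth hotspots and hop-count penalties visible long before RTL exists, which is the point of the KPI-driven methodology described above.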
Finally, the presentation highlights the strategic role of NoC partitioning and restructuring. As designs approach physical limits, achieving timing closure becomes increasingly challenging. Partitioning the NoC into hierarchical or modular segments can simplify routing, reduce wire length, and improve predictability, especially in large AI-focused SoCs.
With insights from Andy Nightingale, VP of Product Management and Marketing at Arteris, and Piyush Singh, Principal Digital SoC Architect at Aion Silicon, attendees will gain a practical, experience-driven perspective on designing scalable, high-performance NoCs. Their combined expertise—spanning system IP, NoC products, performance modeling, and complex heterogeneous SoC architecture—provides a grounded framework for building the next generation of AI-optimized silicon.
In the fast-evolving world of electronic design automation (EDA), where complexity multiplies with every nanometer shrink and AI integration, data silos are the silent killers of innovation. Keysight Technologies, a leader in design and test solutions, tackles this head-on with their webinar “From Silos to Systems, From Data to Insight” (AM Session), scheduled for December 17, 2025, at 10:00 AM PST. This free, one-hour virtual event promises to equip engineering leaders with actionable strategies to dismantle fragmented data ecosystems and unlock AI-driven efficiencies. With over a decade of expertise guiding the charge, presenter Pedro Pires—Product Manager at Keysight—brings his Master’s in Microelectronics Engineering and passion for MLOps to the forefront.
Targeted at Engineering Directors, VPs of R&D, EDA Managers, CAD Professionals, Design and Verification Leads, IT/Data Governance Executives, and AI/ML Engineers, the webinar addresses a universal pain point: disconnected design data that hampers collaboration, traceability, and reuse. In today’s global R&D landscapes, teams grapple with multi-site workflows, compliance demands, and the explosion of IP blocks—issues that Pires notes amplify “the biggest challenges today… managing complexity, ensuring governance, and preparing data for AI.” Keysight’s solution? Their SOS Enterprise Collaboration platform, positioned as the “backbone for modern engineering enterprises.”
The agenda is laser-focused on transformation. Attendees will explore transitioning from isolated silos—think scattered design files, inconsistent metadata, and siloed tools—to a unified, governed platform that fosters seamless collaboration. Pires will delve into how structured, traceable data becomes the fuel for analytics, machine learning, and AI copilots, slashing rework and accelerating design cycles. Imagine verification engineers spotting anomalies in real-time or data scientists training models on clean, enriched datasets without the drudgery of manual reconciliation. Keysight SOS shines here, offering hybrid deployment for secure, scalable operations across borders, complete with audit-ready governance and AI-ready infrastructure.
What sets this apart are the tangible benefits. Organizations adopting such systems report up to 30% faster time-to-market, reduced IP reuse errors, and empowered cross-domain visibility—critical in an era where AI agents could automate 40% of routine design tasks, per industry forecasts. Pires, drawing from his work connecting Keysight’s global teams, will share practical roadmaps: identifying shadow data in legacy tools, implementing metadata standards, and integrating with EDA suites like Cadence or Synopsys for end-to-end traceability. A standout feature is the platform’s emphasis on “agentic workflows,” where AI not only analyzes but anticipates design bottlenecks, turning raw data into predictive insights.
Beyond the session, Keysight sweetens the deal with an EM session for European audiences on the same day, ensuring global accessibility. Registration is straightforward: simply visit the event page and fill out the form, with recordings promised for those who can’t attend live. As Pires emphasizes, “That’s exactly why Keysight SOS becomes critical.” This webinar isn’t just a talk; it’s a catalyst for engineering teams to evolve from reactive data hoarders to proactive insight machines.
In a semiconductor industry racing toward 2nm nodes and beyond, ignoring silos risks obsolescence. Join Keysight on December 17 to bridge the gap from chaos to clarity, and position your organization at the vanguard of AI-accelerated design. The future of EDA isn’t in more data—it’s in making it work together.
It was very clear at the recent 2025 TSMC OIP Ecosystem Forum that the semiconductor I/O landscape has undergone a profound transformation over the past 25 years, evolving from simple general-purpose input/output (GPIO) cells in 180nm nodes to highly complex, multi-protocol, feature-rich libraries in advanced 16nm and 22nm processes. As Stephen Fairbanks, CEO of Certus Semiconductor, outlines in his presentation, modern I/O design is no longer about basic functionality—it is about adaptability, optimization, and market-specific performance.
Historically, a single foundation I/O library per process node sufficed, offering variants of classic GPIO (push-pull LVCMOS) or open-drain I/O (ODIO) for protocols like I2C or SMBus. These were adequate for early 2000s applications in telecom and consumer electronics. Today, explosive growth in mobile computing, IoT, AI at the edge, automotive infotainment, and autonomous systems demands far greater flexibility. A single ASIC may now serve both automotive (requiring CAN) and cellular (needing I3C) markets without dedicated pins. This convergence has birthed the GPODIO, a hybrid I/O capable of operating in both CMOS and open-drain modes, supporting LVCMOS, SPI, I3C, JTAG, and fail-safe open-drain standards.
The GPODIO exemplifies multi-protocol I/O, a cornerstone of modern design. It features configurable output drivers that switch between high-speed GPIO (Tfall < 5ns, Zout 33–120 Ω) and slow-slew open-drain modes (Tfall 20–1000ns, IOL 3–20mA). Input mode control (IMC) supports multiple VIH/VIL/hysteresis thresholds, while fail-safe operation ensures reliability even with push-pull drivers and on-die terminations. Voltage support has also expanded: modern GPIOs handle VDDIO from 1.2V to 3.3V, core supplies down to 0.65V, and external ODIO voltages up to 5V—all in one cell.
Even more advanced are “Super” I/Os, macro cells combining two single-ended or one differential pair, supporting over 20 standards including LVDS, MIPI, HSTL/SSTL with on-die termination (ODT), and POD. These are essential for high-performance computing (HPC) and 5G infrastructure.
Another major trend is I/O library variants. A single GPIO design in 22nm might spawn five libraries—PM22 (ultra-low power IoT, 0.14nA leakage), MM22 (balanced mobile), OG22 (automotive-grade, 8kV HBM), and EG22/TG22 (HPC with staggered footprints for density). Each optimizes speed, leakage, ESD (2kV to 16kV HBM, 6A to 16A CDM), and interface support (SPI, RGMII, eMMC). Foundry catalogs now offer multiple libraries per node, differentiated by metal stack, voltage, and market focus. Product architects must align library selection with application goals—using a low-power IoT library for HPC would compromise performance.
Analog and RF I/O have also matured. Where once designers built custom pads, modern libraries include pre-characterized cells: low-capacitance RF pads (<75fF, >8kV HBM), matched LVDS/HDMI pairs, and high-voltage analog I/Os up to 20V. This reduces design risk and time-to-market.
Emerging die-to-die interfaces for 2.5D/3D packaging and chiplets introduce ultra-low-power, high-density I/Os (e.g., 4Gbps in 16nm, <0.1nA DC, 10×20µm footprint), critical for multi-die AI and memory stacks.
Verification complexity has skyrocketed. A classic GPIO required ~135 PVT corners; a modern multi-voltage, multi-mode GPODIO demands over 12,000 corners, including zero-volt and power-down modes. Accurate .LIB modeling is now a major engineering challenge.
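A quick back-of-the-envelope calculation shows why the corner matrix explodes multiplicatively. The factors below are illustrative assumptions chosen to land near the quoted figures, not Certus Semiconductor’s actual characterization plan.

```python
# Illustrative corner arithmetic (assumed factors, not a real characterization matrix).
def corner_count(process, vddio, vcore, temps, modes):
    # Corners multiply: every independent axis scales the whole matrix.
    return process * vddio * vcore * temps * modes

# Classic single-supply GPIO: e.g. 5 process corners x 3 VDDIO levels x 3 temperatures x 3 modes
classic = corner_count(process=5, vddio=3, vcore=1, temps=3, modes=3)
print("classic GPIO corners:", classic)    # 135

# Multi-voltage GPODIO: more supply levels, core voltages, and thresholds/modes,
# with zero-volt and power-down conditions folded into the mode count.
modern = corner_count(process=5, vddio=7, vcore=4, temps=5, modes=18)
print("modern GPODIO corners:", modern)    # 12,600
```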
Bottom line: I/O design has shifted from monolithic, one-size-fits-all libraries to a sophisticated ecosystem of optimized, configurable, and market-tailored solutions. The days of defaulting to the “base” foundry library are gone. Success in 2025 requires deep understanding of application requirements, careful library selection, and robust verification—ensuring performance, power, reliability, and cost align with diverse, demanding end markets.
Certus Semiconductor, with over 30 process node I/O libraries and expertise in ESD, RF, and multi-protocol design, stands at the forefront of this evolution.
Daniel is joined by Fabrizio Del Maffeo, CEO and co-founder of Axelera AI, a Netherlands-based startup delivering the world’s most powerful and advanced solutions for AI at the edge.
Fabrizio describes the significant momentum Axelera AI has achieved in the market, with over 350 customers. Dan then explores the company’s newest product, Europa, with Fabrizio, who discusses the 3-5X performance increase delivered by this new product, along with its cost and power benefits. Fabrizio sees this chip opening new markets and new capabilities for AI at the edge, and he believes Europa will have a significant impact in the market.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
Bug localization continues to be a challenge for both bug triage and root-cause analysis. Agentic approaches suggest a way forward. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.
It’s been a while since we last looked at localization. Agentic activity around debug, growing more popular, prompts another look. The paper’s method is based on building a graph view of a code base, where nodes are code entities (functions, classes, files, directories) and edges are relations between nodes such as contain, invoke, import, and inherit.
One point I find interesting is that this is a purely static analysis of the codebase supported by training against (GitHub) bug and resolution reports. There is no attempt that I saw to probe dynamic behavior to test hypotheses. The authors claim high levels of accuracy at file, module and function levels.
Paul’s view
We look again this month at LLM-based code localization – finding the relevant file/class/function to fix a bug or make an enhancement. Much of the focus for ongoing research, both in academia and industry, is on the retrieval part of the prompt engineering process, where additional information is added to the prompt to help the LLM perform its localization task.
This month’s paper out of Stanford, Yale, and USC presents a retrieval system, LocAgent, based on a knowledge graph that is essentially the union of the file/directory hierarchy on disk (contain, import edges) and class/function relationships based on the elaborated syntax tree for the code (call, inherit edges). Traversing edges in this graph can traverse files and directories as well as function call paths in the code.
The authors also contribute a new benchmark suite for code localization, LocBench. This suite is based on code changes to Python repositories in Github dated October 2024 or later, which is after the training date for all the LLMs benchmarked in the paper. It also has a balance of code changes between bug fixes, new feature requests, and performance issues (around 240, 150, and 140 cases respectively).
The authors benchmark LocAgent vs. other leading code localizers using LocBench, measuring each localizer’s ability to report the correct set of files/modules/functions. Using Claude-3.5-Sonnet, and compared to OpenHands, the best alternative tool benchmarked, LocAgent scores 6% higher in its ability to report the correct files in its top-10, and about 2% better at reporting the correct methods and functions in its top-10. A solid contribution in probably one of the most actively researched topics in agentic AI today. It’s worth noting, however, that the absolute top-10 score on function localization is around 60%, which is still low, so there is huge room for improvement.
As a final contribution, the authors also present a fine-tuned mini-LLM (7B parameter Qwen model) based on around 800 training cases taken from GitHub prior to October 2024 and show that this works reasonably well with LocAgent on LocBench, with about 8% lower top-10 scores. Its inference cost is 15x lower than using Claude-3.5-Sonnet (5 cents vs. 79 cents). An interesting datapoint on cost-outcome trade-off.
Raúl’s view
Where in a code base do you fix a bug, add a feature or address security or performance improvements? This month’s paper proposes LOCAGENT, a framework for code localization — the task of identifying the precise files, classes, or functions in a codebase that need to be modified for a fix. LOCAGENT combines 1) A unified graph representation of an entire code structure and dependencies with four node types (directory, file, class and function) and four relation types (contain, import, invoke and inherit). It is lightweight (takes seconds to build), enables powerful multi-hop reasoning, unifies content search with structural traversal, and is explicitly optimized for LLM consumption. 2) Only three highly optimized tools for search and traversal (SearchEntity, TraverseGraph and RetrieveEntity). 3) Fine-tuned open-source LLMs (Qwen-2.5-Coder 7B and 32B) for localization.
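The sketch below reconstructs, in simplified form, what such a heterogeneous code graph and a multi-hop traversal tool might look like. It is my own illustration in Python using networkx, not the authors’ implementation; the entity names are invented and the traversal function is only a toy stand-in for a TraverseGraph-style tool.

```python
# Simplified reconstruction of a LocAgent-style code graph (illustrative only).
import networkx as nx

g = nx.MultiDiGraph()

def add_entity(name, kind):
    g.add_node(name, kind=kind)   # kind in {directory, file, class, function}

add_entity("src/", "directory")
add_entity("src/parser.py", "file")
add_entity("Parser", "class")
add_entity("Parser.parse", "function")
add_entity("src/utils.py", "file")
add_entity("tokenize", "function")

g.add_edge("src/", "src/parser.py", relation="contain")
g.add_edge("src/parser.py", "Parser", relation="contain")
g.add_edge("Parser", "Parser.parse", relation="contain")
g.add_edge("src/parser.py", "src/utils.py", relation="import")
g.add_edge("Parser.parse", "tokenize", relation="invoke")

def reachable_functions(start, max_hops=3):
    # Multi-hop traversal over contain/invoke edges, returning reachable functions.
    found, frontier = set(), {start}
    for _ in range(max_hops):
        nxt = set()
        for node in frontier:
            for _, dst, data in g.out_edges(node, data=True):
                if data["relation"] in ("contain", "invoke"):
                    nxt.add(dst)
                    if g.nodes[dst]["kind"] == "function":
                        found.add(dst)
        frontier = nxt
    return found

print(reachable_functions("src/parser.py"))   # {'Parser.parse', 'tokenize'}
```

The appeal of this representation is that one traversal primitive covers both directory structure and call paths, which is exactly what makes multi-hop localization queries cheap for an LLM agent.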
The authors claim that existing benchmarks have two key problems: potential training contamination (LLMs may have seen these repos/issues during pretraining), and an overfocus on bug reports, lacking coverage of feature requests, security issues, and performance problems. So, they created a new benchmark, LOC-BENCH, collected from modern post-2024 repos to avoid contamination; it includes 560 issues across 4 categories (bugs, features, performance, security), curated to ensure issues require realistic localization. To evaluate their approach, the authors also use SWE-Bench-Lite (300 cases), which was created primarily to evaluate end-to-end bug-fixing capabilities, localization being only an intermediate step.
The experimental evaluation of LOCAGENT covers several dimensions; below is my attempt to summarize them:
Using fine-tuned open source LLMs (Qwen 2.5) achieves performance comparable to Claude-3.5 at ~86% lower cost.
LOCAGENT correctly locates the file, module, or function within its top-1 to top-10 predictions 72-94% of the time, outperforming all other methods: embedding-based (40-85%), procedure-based (55-80%), and agent-based (46-90%).
Cost efficiency is evaluated only against agent-based methods (the best of the rest), and LOCAGENT using Qwen beats the best of these (Moatless using Claude) at $0.09 versus $0.46 for accuracy at 10 predictions.
Evaluating the tools using LOC-BENCH shows LOCAGENT being superior to other agent-based methods by small margins (5-10%, the exception being security enhancements, where the margin reaches up to 20%).
Code localization is an important but underemphasized task relative to bug fixing or code generation. The paper convincingly argues that localization is distinct from retrieval, more complex, and often the bottleneck in program repair. The approach using a heterogeneous graph, agent tools, and fine-tuned LLMs is somewhat novel and empirically effective. LOC-BENCH is well motivated, and the breadth of empirical analysis and the results are strong. Evaluation is limited to Python; no evidence is provided for non-Python languages, and general applicability is asserted but not demonstrated. Some baselines (SWE-Agent, OpenHands) are not primarily localization systems, so comparisons may overstate LOCAGENT’s advantages.
This is a strong paper with contributions to code localization and agentic software engineering. It is technically sound, well-evaluated, and practically useful. The combination of a unified dependency graph, carefully designed agent tools, and efficient fine-tuning forms an approach that advances the state of the art.
Artificial intelligence (AI) is reshaping industries worldwide. Consumer-grade AI solutions are getting significant attention in the media for their creativity, speed, and accessibility—from ChatGPT and Meta’s AI app to Gemini for image creation, Sora for video, Suno for music, and Perplexity for web search.
However, adapting these impressive models for high-stakes engineering applications, such as semiconductor chip design, manufacturing, and robotics, is much more complex. In these fields, model results that are incorrect, fabricated (hallucinations), or inconsistent are unacceptable. In consumer AI, a mistake might lead to a funny answer. In chip design, it can cost millions during tapeout and manufacturing. That’s why the EDA industry needs a more industrial-grade AI approach.
Consumer-grade AI versus industrial-grade AI
To understand this challenge, let’s first define the key characteristics of consumer-grade AI and see how they differ from the requirements for industrial-grade AI.
Consumer-grade AI is often optimized for:
Creativity: Prioritizing the generation of novel ideas, text, and imagery, even when the results are not perfectly factual or precise.
Mobile support: Emphasizing access and ease of use on smartphones and other portable devices.
User-specificity and personalization: Adapting its style, recommendations, and memory to an individual’s personal history and stated preferences.
Shareability: Integrating tools to quickly post, link, or export generated content to social media or messaging platforms.
Voice mode: Enabling hands-free operation through spoken commands and audio responses for maximum convenience.
These principles are fundamentally different from the characteristics required for industrial-grade AI, which are based on the following:
Accuracy: Ensuring all outputs are quantitatively correct and conform to strict physical laws and engineering constraints, where even a tiny error can be critical.
Verifiability: Providing transparent, traceable decision-making paths so engineers can audit precisely how and why the AI arrived at a specific result.
Robustness: Maintaining high performance, reliability, and consistency even when faced with novel, noisy, or incomplete data sets.
Generalizability: Successfully applying insights and models trained on one design problem to new, unseen engineering problems.
Usability: Seamless integration with established computer-aided design (CAD) and computer-aided engineering (CAE) software tools and engineering workflows, rather than functioning as a separate, standalone app, and without requiring extensive training for engineers to use these AI solutions.
Figure 1. Consumer-grade AI is different in important ways from industrial-grade AI.
AI for the high stakes of chip design
Now that we understand the key differences between the two paradigms, let’s explore why industrial-grade AI is necessary for the electronic design automation (EDA) tools that power chip design.
Firstly, accuracy is paramount. Every step in the chip design process, from the initial schematic to the final tapeout, demands absolute precision. A single error could substantially harm chip production or critical industrial processes, resulting in significant financial and operational losses from wasted manufacturing costs, complete chip failure, or costly product recalls. That’s a high-stakes risk the industry simply must avoid.
Secondly, robustness and reproducibility are critical. Today’s general-purpose LLMs are probabilistic models, meaning they may not guarantee the exact same output every time. This variability is problematic for engineering. If a general-purpose LLM is used for a precise task such as RTL generation or high-level synthesis (HLS), it might struggle to achieve complete reproducibility. This could make it difficult to replicate a specific design block or apply the same IP block consistently in a new chip design, creating significant challenges for verification and manufacturing.
Thirdly, verifiability and traceability are essential. Engineers can’t rely on a “black box” that just gives an answer. They need to understand how the AI made its decisions. For example, during placement and routing, an AI might analyze thousands of potential layouts. A verifiable system would log these different options and the trade-offs associated with them. This allows the chip designer to trace back and see why one layout achieved better power, performance and area (PPA) than another, enabling them to trust and validate the final design.
Examples of AI in EDA
A clear example of these industrial-grade principles in action is seen in Siemens’ Solido Design Environment software. It uses Adaptive and Additive AI technologies to validate designs and IPs through Monte Carlo simulations for mixed-signal designs and custom ICs. This provides orders-of-magnitude speedup for complex tasks, such as variation-aware analysis. These technologies use local machine learning models to predict the results of intensive SPICE simulations from a few initial full-fidelity runs. However, it doesn’t just guess blindly. It constantly checks its own predictions against a confidence threshold, providing SPICE-accurate results. If a prediction falls outside this safe margin, the system automatically reverts to running a full-fidelity SPICE simulation to ensure correctness. This clever hybrid approach perfectly demonstrates the industrial-grade principles:
It is accurate because it guarantees all results fall within the user-specified threshold.
It is verifiable because it self-checks every single prediction for accuracy.
It is robust because this trusted method can be reliably reused across different simulation conditions.
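To make the predict-then-verify pattern described above concrete, here is a minimal sketch. It is a conceptual illustration, not Solido’s implementation: the nearest-neighbor surrogate, the distance-based confidence proxy, and the threshold value are all assumptions standing in for the real machine-learning models and accuracy targets.

```python
# Conceptual sketch of a hybrid ML/SPICE loop (illustrative only).
import random

def spice_sim(x):
    # Stand-in for an expensive full-fidelity SPICE simulation.
    return 1.8 * x + 0.05 * random.gauss(0, 1)

class Surrogate:
    def __init__(self, seed_points):
        self.xs = [x for x, _ in seed_points]
        self.ys = [y for _, y in seed_points]

    def predict(self, x):
        # Nearest-neighbor prediction with a crude confidence estimate:
        # the farther we are from known data, the less confident we are.
        nearest = min(self.xs, key=lambda s: abs(s - x))
        return self.ys[self.xs.index(nearest)], abs(nearest - x)

seeds = [(x, spice_sim(x)) for x in (0.0, 0.5, 1.0)]   # a few full-fidelity runs
model = Surrogate(seeds)

THRESHOLD = 0.1
for x in (0.48, 0.73, 0.99):
    y_hat, conf = model.predict(x)
    if conf > THRESHOLD:
        y_hat = spice_sim(x)               # fall back to a real simulation
        print(f"x={x}: outside confidence margin, ran SPICE -> {y_hat:.3f}")
    else:
        print(f"x={x}: accepted ML prediction -> {y_hat:.3f}")
```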
Another example is the recently launched Aprisa AI solution. AI design explorer, a major technology in Aprisa AI, uses machine learning and reinforcement learning algorithms to assist at all major stages of digital implementation and tune workflows for optimal PPA results. Aprisa AI explores different flows within a targeted design space at each stage, taking into consideration the type of design and the designer’s chosen metrics. It decides automatically which paths to continue forward to the next stages of exploration, until it arrives at a full flow, and does so while utilizing compute core resources more efficiently. While the agent can be launched and automatically make all the decisions to arrive at the best flow solution, Aprisa AI provides verifiability and flexibility to the designers. All databases at each stage are saved for user inspection and interaction with the data and logs. The Aprisa AI design explorer also provides a dashboard with all the results of the explorations, allowing the designer to view all the metrics and examine why one approach might have a better PPA than another. Again, as in the example above, the principles of verifiability, robustness, ease of use, and generalizability hold true.
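For readers who want a concrete picture of staged exploration with pruning, the toy beam-search sketch below illustrates the general pattern of carrying only the most promising partial flows forward. The stage names, option lists, and scoring function are invented for illustration; this is not Aprisa AI’s algorithm.

```python
# Toy staged flow exploration with pruning (illustrative only).
import random

random.seed(0)
stages = ["synthesis", "placement", "cts", "routing"]
options = {
    "synthesis": ["effort_med", "effort_high"],
    "placement": ["util_60", "util_70", "util_80"],
    "cts":       ["skew_tight", "skew_relaxed"],
    "routing":   ["default", "timing_driven"],
}

def score(flow):
    # Stand-in for a PPA metric measured after running the partial flow.
    return random.uniform(0, 1) + 0.2 * flow.count("timing_driven")

BEAM = 2                                     # candidate flows that survive each stage
candidates = [[]]
for stage in stages:
    expanded = [flow + [opt] for flow in candidates for opt in options[stage]]
    expanded.sort(key=score, reverse=True)
    candidates = expanded[:BEAM]             # prune: only the most promising flows advance

print("best explored flow:", " -> ".join(candidates[0]))
```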
Leading the AI transformation of chip design
This journey for EDA AI is about more than just adopting consumer-grade AI; it is about adopting solutions that are accurate, robust, verifiable, usable, and generalizable. At Siemens EDA, we are committed to driving this transformation in chip design by developing solutions that engineers, managers, and executives can rely on for their most critical semiconductor designs. We believe the future of chip design won’t be built on generic chatbots, but on trusted, explainable, industrial-grade AI partners fully integrated into every step of the semiconductor workflow. You can learn more about Siemens’ AI efforts here: EDA AI System | Siemens Software
About the author
Niranjan Sitapure, PhD, is the Central AI Product Manager at Siemens EDA. He oversees road mapping, development, strategic AI initiatives, and product marketing for the Siemens EDA AI portfolio. With a PhD in Engineering from Texas A&M University, Niranjan has honed his expertise in advanced AI/ML technologies, including time-series transformers, LLMs, and digital twins for engineering applications. He can be reached at Niranjan.sitapure@siemens.com or on LinkedIn.
Mixel, Inc., a longtime leader in mixed-signal and MIPI® interface IP, entered a new chapter in its history following its acquisition by Silvaco Group, Inc., a global provider of design software and semiconductor IP. The acquisition, completed earlier in 2025, marks a strategic move that combines Silvaco’s deep expertise in EDA tools, TCAD software, and foundry enablement with Mixel’s broad portfolio of high-speed mixed-signal IP. Together, the companies are creating a powerful end-to-end offering for semiconductor design, from device modeling and simulation through to silicon-proven physical IP integration.
Founded in 1998 and headquartered in San Jose, California, Mixel built its reputation as one of the most trusted suppliers of mixed-signal IP, especially in the MIPI ecosystem. Its extensive portfolio includes MIPI D-PHY, C-PHY, and A-PHY interface IP, as well as other high-performance analog and mixed-signal components. These solutions are used across smartphones, automotive electronics, AI accelerators, and IoT systems. The company’s silicon-proven IP is deployed in hundreds of SoCs worldwide, with customers ranging from large semiconductor companies to innovative startups.
The integration into Silvaco enables Mixel’s technology to reach a broader customer base while expanding Silvaco’s growing semiconductor IP business. This acquisition aligns with Silvaco’s strategy to provide complete design enablement solutions that span from advanced simulation tools to ready-to-use IP blocks. By bringing Mixel into its portfolio, Silvaco strengthens its position in the connectivity and interface IP space — a critical area for next-generation chip design, especially as systems become more complex and bandwidth-hungry.
One of the most significant synergies comes from the complementary nature of the two companies’ offerings. Mixel’s expertise in silicon-proven, low-power PHY IPs perfectly complements Silvaco’s strength in modeling, circuit simulation, and process design kit (PDK) technology. Together, they can accelerate customers’ design cycles, reduce risk, and improve time-to-market for advanced SoCs and 3D ICs.
Since the acquisition, Mixel’s engineering and customer support teams have continued to operate from their San Jose base, ensuring continuity for existing customers and partners. The Mixel brand, known for reliability and interoperability across foundries and design ecosystems, remains intact under Silvaco’s ownership. Customers continue to benefit from Mixel’s long-standing partnerships with major foundries such as TSMC, Samsung, GlobalFoundries, and UMC, covering process nodes from 180nm to 3nm.
Mixel’s automotive and industrial design wins have also gained new strategic importance under Silvaco, as the combined company targets markets that demand long product lifecycles and high reliability. The MIPI A-PHY product line, in particular, positions Silvaco well to serve the rapidly growing automotive connectivity segment, which is essential for advanced driver-assistance systems (ADAS) and in-vehicle networking.
Looking ahead, Silvaco and Mixel are expected to focus on next-generation IP development supporting chiplet interconnects, AI-driven edge devices, and 3D system integration. By combining simulation, design, and IP under one roof, the new organization aims to simplify complex semiconductor development workflows and accelerate innovation across the industry.
Bottom line: With the acquisition, Mixel’s legacy of mixed-signal excellence continues—now strengthened by Silvaco’s global scale, EDA expertise, and vision for the future of semiconductor design. The union positions the combined company as a formidable player in the race to deliver faster, more efficient, and more connected silicon solutions.
Mixel is a leading provider of mixed-signal IPs and offers a wide portfolio of high-performance mixed-signal connectivity IP solutions. Mixel’s mixed-signal portfolio includes PHYs and SerDes, such as ASA Motion Link SerDes, MIPI® D-PHY™, MIPI M-PHY®, MIPI C-PHY™, LVDS, and many dual-mode PHYs supporting multiple standards. Mixel was founded in 1998 and is headquartered in San Jose, CA, with global operations to support a worldwide customer base. For more information, contact Mixel at info@mixel.com or visit www.mixel.com. You can also follow Mixel on LinkedIn, Twitter, Facebook, or YouTube.
The semiconductor industry faces an unprecedented crisis that threatens the very foundation of technological innovation. According to the latest Siemens EDA / Wilson Research Study, first-silicon success rates have plummeted to just 14%[1]—the lowest figure in more than twenty years of tracking this data. This isn’t merely a statistical anomaly; it represents a fundamental breakdown in our ability to deliver working silicon on schedule. Re-spin costs depend on node size and the type of fix, ranging from $15M+ at 7nm to more than $100M at 3nm for a full re-spin.[2]
The crisis deepens as the industry pushes toward 2nm process nodes, where the complexity of testing and ensuring manufacturability increases exponentially. Advanced node designs demand unprecedented compute and memory resources for verification workflows, making traditional on-premises infrastructure increasingly inadequate.
As recent industry analysis has highlighted, addressing this crisis requires robust data infrastructure that enables mobility, security, and availability—the foundational pillars for next-generation verification approaches. The question isn’t whether to move to the cloud—it’s how to architect the complete ecosystem that makes AI-enhanced, 2nm-capable verification possible.
Beyond Data Infrastructure – The Complete Cloud Ecosystem
AWS and NetApp together deliver the complete ecosystem demanded by 2nm-era semiconductor development. While NetApp’s FSx for NetApp ONTAP provides the high-performance, globally accessible storage foundation with FlexCache technology for seamless data mobility and FlexClone capabilities for instant environment provisioning, AWS contributes the elastic compute, advanced networking, and AI/ML services that transform how verification workflows operate at scale.
This partnership addresses the memory-intensive reality of advanced node verification. As semiconductor devices increase in density and complexity, physical verification requires compute nodes with increasingly high memory-to-core ratios and larger numbers of high-performance cores. Traditional on-premises infrastructure cannot economically provide the burst capacity that advanced verification workflows demand.
The complete transformation extends far beyond “bigger HPC jobs” to encompass four integrated capabilities:
Elastic Resource Scaling that eliminates capital expenditure constraints,
Accelerated Modernization with access to the latest AMD and Intel-based instances optimized for EDA workloads,
Global Collaboration through secure chambers built on AWS infrastructure with NetApp’s global data fabric, and
Compressed Feedback Cycles via real-time analytics dashboards.
Modern EDA workflows require seamless integration across multiple tools, massive parallel-processing capabilities, and the ability to handle petabyte-scale datasets. Cloud environments provide the infrastructure elasticity to scale from hundreds to thousands of cores within minutes, enabling verification teams to meet aggressive project timelines without the months-long hardware procurement cycles that plague on-premises deployments.
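As one concrete, hedged illustration of this elasticity, the snippet below bumps the desired capacity of a hypothetical EC2 Auto Scaling group backing a verification compute farm using boto3. The group name, region, and target size are assumptions, and real EDA deployments typically drive scaling through a job scheduler or a managed service rather than direct API calls.

```python
# Hedged illustration of elastic scale-out for a burst of regression jobs.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

# Request more verification workers ahead of a regression burst.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="eda-verification-workers",   # hypothetical group name
    DesiredCapacity=512,                                # scale the farm up within minutes
    HonorCooldown=False,
)

# Confirm the group picked up the new target.
resp = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["eda-verification-workers"]
)
print(resp["AutoScalingGroups"][0]["DesiredCapacity"])
```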
The Analytics Foundation for AI Optimization
Cloud environments enable comprehensive data collection from verification runs, resource utilization patterns, coverage metrics, and performance benchmarks—creating rich datasets for analytics and optimization. This data foundation becomes the cornerstone for implementing AI-driven optimization that can fundamentally transform verification efficiency.
Real-Time Analytics Layer: Interactive dashboards provide immediate visibility into verification progress, bottleneck identification, and resource efficiency metrics. Teams can track coverage analysis, debug cycle times, and project completion status in real-time, enabling rapid course corrections with near immediate feedback loops.
AI-Driven Optimization Potential: With robust analytics foundations in place, AI systems can analyze verification patterns to optimize resource allocation, predict potential bottlenecks, and suggest configuration improvements. This creates opportunities for continuous improvement loops that learn from each verification cycle, identifying optimal tool configurations, predicting resource needs based on design complexity, and automatically adjusting compute allocation to minimize both time and cost.
Data-Driven Decision Making: The comprehensive analytics enable engineering teams to make informed decisions about resource allocation, tool selection, and verification strategies based on actual performance data rather than estimates. For instance, determining the value of specific regression tests requires the ability to measure not only improvement in coverage or bugs identified, but also the cost of running that test.
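As a simple illustration of that cost/value trade-off, the sketch below weighs each regression test’s coverage gain and bugs found against its compute cost. All numbers, field names, and the weighting are assumptions made for illustration, not measured data.

```python
# Illustrative regression-test value calculation (assumed numbers and weighting).
tests = [
    {"name": "uvm_smoke",   "coverage_gain_pct": 0.4, "core_hours": 12,  "bugs_found": 0},
    {"name": "full_random", "coverage_gain_pct": 3.1, "core_hours": 900, "bugs_found": 2},
    {"name": "gls_subset",  "coverage_gain_pct": 1.2, "core_hours": 300, "bugs_found": 1},
]

COST_PER_CORE_HOUR = 0.05   # assumed blended $/core-hour

for t in tests:
    cost = t["core_hours"] * COST_PER_CORE_HOUR
    # Arbitrary weighting: one bug found counts as 5 coverage points.
    value = t["coverage_gain_pct"] + 5.0 * t["bugs_found"]
    print(f'{t["name"]:12s} cost=${cost:7.2f}  value per $ = {value / cost:.3f}')
```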
Figure 2: Verification Quality Metrics and Coverage Analysis
Figure 3: Resource Utilization and Cost Optimization
Advanced analytics can correlate design characteristics with verification resource requirements, enabling predictive capacity planning and automated scaling decisions. This intelligence layer transforms reactive verification management into proactive optimization, directly addressing the root causes behind the industry’s 14% first-silicon success challenge.
Security and IP Protection – Enterprise-Grade Implementation
The semiconductor industry’s IP protection concerns are addressed through enterprise-grade security implementations that often exceed on-premises capabilities. AWS provides hardware root-of-trust, comprehensive compliance certifications, and secure collaboration chambers enabling distributed teams, IP partners, and foundries to work together while maintaining strict access controls. AWS is compliant with ISO/IEC 27001, ITAR, and AICPA SOC 2; please refer to AWS documentation for a full list of compliance programs.
NetApp’s FSx for ONTAP enhances security through FlexCache technology that enables global data access without compromising IP boundaries, and FlexClones that provide instant, isolated environment provisioning for different verification runs. These capabilities ensure that sensitive design data remains protected while enabling the collaboration essential for complex SoC development.
The security architecture implements zero-trust principles with granular access controls, encrypted data transmission, and comprehensive audit trails. Multi-tenant isolation ensures that even within shared cloud infrastructure, each project maintains complete data separation and access control.
Advanced threat detection and automated response capabilities provide continuous monitoring for potential security incidents. This comprehensive security framework often provides superior protection compared to traditional on-premises environments, where security updates and monitoring may lag behind current threat landscapes.
The Path Forward – Measurable Transformation
The semiconductor industry stands at an inflection point. The 14% first-silicon success rate represents a fundamental challenge that demands transformation. Companies that embrace the complete AWS and NetApp ecosystem gain access to elastic scaling that handles advanced verification complexity, analytics foundations that enable data-driven optimization, and security implementations that protect valuable IP.
Implementation Roadmap: Organizations can begin their transformation with pilot projects that demonstrate immediate value while building confidence in cloud-native approaches. The migration typically follows a phased approach: assessment and planning, pilot implementation, gradual workload migration, and full ecosystem optimization.
Figure 4: Cloud is a natural fit for EDA
ROI Considerations: Early adopters report significant improvements in verification cycle times, resource utilization efficiency, and team collaboration effectiveness. The transformation addresses core industry challenges: verification complexity at advanced nodes, resource allocation inefficiencies, collaboration barriers across global teams, and the need for faster feedback cycles. By accelerating innovation rates and improving on-schedule metrics with AWS, these early adopters are earning more design wins and seeing significant growth in both revenue and profits.
Future Outlook: As verification requirements continue to grow with advanced process nodes, cloud-native EDA workflows provide the foundation for addressing the industry’s silicon success challenges. The integration of AI-driven optimization with comprehensive analytics creates a continuous improvement cycle that becomes more effective over time.
The future belongs to companies that recognize cloud transformation as the foundation for next-generation semiconductor development that can address the industry’s fundamental verification challenges. Success in the 2nm era and beyond requires not just better tools, but completely reimagined workflows leveraging the full potential of cloud-native architectures.
Disclaimer:
The views and opinions expressed on this blog are solely those of the author(s) and do not represent the views or positions of any employer, organization, or entity with which the author is or has been affiliated. This blog is a personal platform, and all content is shared in the author’s individual capacity.
Authors:
Nikhil Sharma is a Solutions Architecture Leader at Amazon Web Services (AWS), where he and his team of Solutions Architects help customers solve critical business challenges using AWS cloud technologies and services. With 25+ years of industry experience, Nikhil specializes in enterprise architecture and innovation. He is passionate about helping organizations leverage cloud technology to drive business outcomes.
Sunghwan Son is a Senior Solutions Architect at Amazon Web Services (AWS) who brings 17 years of distinguished experience in semiconductor design and cloud computing technologies. He specializes in optimizing Electronic Design Automation (EDA) infrastructure and developing innovative cloud solutions for enterprise customers.
Paul Mantey is an FSxN Sales Specialist for High Tech, EDA & Semiconductors at NetApp, Inc., where he leads a team focused on enabling builders and developers across the High-Tech, EDA, and Semiconductor Development segments. Prior to joining NetApp, Paul worked 13 years at Hewlett-Packard, holding various roles in Design Engineering, Enterprise Architecture, Virtualization Program Management, and Product Development. His thirteen patents represent extensive contributions to systems architecture, integration testing, and management hardware design.
There is always a lot of buzz about advanced AI workloads at trade shows: how to train them and how to run them. Advanced chip and multi-die designs are how AI is brought to life, so it was a perfect fit for discussion at a show. But there is another side of this discussion. Much of the work going on in AI workloads has to do with processing data acquired from the real world around us. There are massive sensor networks everywhere acquiring all kinds of information. The critical link for all this is data conversion – converting the analog signals from sensor networks into digital information that can be processed by AI workloads. Getting this part right is critical, and it’s not easy to do.
That’s why a recent presentation from Alphacore caught my eye. The company was showing the video of a real test run of its latest analog-to-digital converter IP in the lab, and the results were quite impressive. Here’s what I found in my tour of advanced data conversion with Alphacore.
About Alphacore
Alphacore is a company that focuses on analog, mixed signal and RF solutions. Some of its primary businesses include:
High performance and low power analog, mixed signal, and RF electronics, including advanced analog-to-digital converters
High-speed visible light and infrared readout ICs (ROICs) and full camera systems
Robust power management ICs (PMICs) for space and high-energy physics experiments
Innovative devices ensuring supply chain and IoT cybersecurity
The company also provides hardened versions of many of its products for harsh environments, including radiation and extreme temperatures. You can learn more about this unique company on SemiWiki here.
The Data Converter Demo
Alphacore presented the lab video of its A12B9G-GF22 at a recent event. The part is a low-power, high-speed analog to digital converter (ADC) intellectual property design block fabricated in the GlobalFoundries 22nm FDSOI process. A critical challenge for any ADC is delivering high accuracy results at a high data rate while consuming as little power as possible. The Alphacore IP block excels in all these areas, and the video demonstration was proof of that.
The demo configuration is shown in the photo at the top of this post. Below is a version that illustrates what equipment was used.
Demo setup
Alphacore showcased some very impressive results with this demo. I can tell you from first-hand experience that capturing real-time results of a precision analog demonstration like this can be quite challenging. The smallest issue can ruin the whole flow. The robust behavior and excellent results shown say a lot about the quality of this ADC.
The video showcased 9 gigasamples per second with 12-bit resolution using under 100 milliwatts of power. Those are impressive results. Regarding accuracy and stability, there is a parameter called spurious-free dynamic range, or SFDR. It is a measure of the ratio between the level of the desired input signal and the level of the largest distortion component in the signal’s spectrum, typically represented in decibels (dB). This is a key figure of merit for ADCs, showing the circuit’s ability to distinguish the signal from noise and distortion.
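For readers unfamiliar with the measurement, the short numpy sketch below shows how SFDR can be read off an FFT of a captured tone. It uses synthetic data, a simple window, and arbitrary spur levels; it is not Alphacore’s lab procedure, only an illustration of the definition above.

```python
# Illustrative SFDR calculation on synthetic ADC output data.
import numpy as np

fs = 9e9                      # 9 GSps sample rate
n = 4096
t = np.arange(n) / fs
fin = 997e6                   # input tone near 1 GHz

# Synthetic output: fundamental plus a small 3rd-harmonic spur and noise.
x = np.sin(2 * np.pi * fin * t) + 1e-3 * np.sin(2 * np.pi * 3 * fin * t)
x += 1e-4 * np.random.randn(n)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
fundamental_bin = np.argmax(spectrum)

# Largest spur = biggest bin outside a small window around the fundamental.
mask = np.ones_like(spectrum, dtype=bool)
mask[max(0, fundamental_bin - 3):fundamental_bin + 4] = False
mask[:3] = False                                   # ignore DC leakage
largest_spur = spectrum[mask].max()

sfdr_db = 20 * np.log10(spectrum[fundamental_bin] / largest_spur)
print(f"SFDR ~ {sfdr_db:.1f} dBc")
```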
To cover this part, a series of measurements were run with varying input frequencies. The resulting FFT plots reveal some excellent results as shown below.
Demo results
The team explained that Alphacore’s ADC technology can be used at every design stage, from transistor-level component IP blocks to complete ASIC and SoC development as illustrated in the figure below.
To Learn More
If you have a need to connect measured data to digital algorithms, you should definitely learn more about what Alphacore has to offer. To begin, you can access a Product Brief for the A12B9G-GF22 here. And that summarizes a tour of advanced data conversion with Alphacore.