Why chip design needs industrial-grade EDA AI
by Admin on 11-25-2025 at 10:00 am

By Niranjan Sitapure

Artificial intelligence (AI) is reshaping industries worldwide. Consumer-grade AI solutions are getting significant attention in the media for their creativity, speed, and accessibility—from ChatGPT and Meta’s AI app to Gemini for image creation, Sora for video, Suno for music, and Perplexity for web search.

However, adapting these impressive models for high-stakes engineering applications, such as semiconductor chip design, manufacturing, and robotics, is much more complex. In these fields, model results that are incorrect, fabricated (hallucinations), or inconsistent are unacceptable. In consumer AI, a mistake might lead to a funny answer. In chip design, it can cost millions during tapeout and manufacturing. That’s why the EDA industry needs a more industrial-grade AI approach.

Consumer-grade AI versus industrial-grade AI

To understand this challenge, let’s first define the key characteristics of consumer-grade AI and see how they differ from the requirements for industrial-grade AI.

Consumer-grade AI is often optimized for:

  • Creativity: Prioritizing the generation of novel ideas, text, and imagery, even when the results are not perfectly factual or precise.
  • Mobile support: Emphasizing access and ease of use on smartphones and other portable devices.
  • User-specificity and personalization: Adapting its style, recommendations, and memory to an individual’s personal history and stated preferences.
  • Shareability: Integrating tools to quickly post, link, or export generated content to social media or messaging platforms.
  • Voice mode: Enabling hands-free operation through spoken commands and audio responses for maximum convenience.

These principles are fundamentally different from the characteristics required for industrial-grade AI, which are based on the following:

  • Accuracy: Ensuring all outputs are quantitatively correct and conform to strict physical laws and engineering constraints, where even a tiny error can be critical.
  • Verifiability: Providing transparent, traceable decision-making paths so engineers can audit precisely how and why the AI arrived at a specific result.
  • Robustness: Maintaining high performance, reliability, and consistency even when faced with novel, noisy, or incomplete data sets.
  • Generalizability: Successfully applying insights and models trained on one design problem to new, unseen engineering problems.
  • Usability: Integrating seamlessly with established computer-aided design (CAD) and computer-aided engineering (CAE) tools and engineering workflows, rather than functioning as a separate, standalone app, while requiring minimal training for engineers to adopt.
Figure 1. Consumer-grade AI is different in important ways from industrial-grade AI.

AI for the high stakes of chip design

Now that we understand the key differences between the two paradigms, let’s explore why industrial-grade AI is necessary for the electronic design automation (EDA) tools that power chip design.

Firstly, accuracy is paramount. Every step in the chip design process, from the initial schematic to the final tapeout, demands absolute precision. A single error can derail chip production or critical industrial processes, resulting in wasted manufacturing costs, complete chip failure, or costly product recalls. That is a high-stakes risk the industry simply must avoid.

Secondly, robustness and reproducibility are critical. Today’s general-purpose LLMs are probabilistic models, meaning they may not guarantee the exact same output every time. This variability is problematic for engineering. If a general-purpose LLM is used for a precise task such as RTL generation or high-level synthesis (HLS), it might struggle to achieve complete reproducibility. This could make it difficult to replicate a specific design block or apply the same IP block consistently in a new chip design, creating significant challenges for verification and manufacturing.
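
The reproducibility concern can be made concrete with a minimal sketch. The toy token distribution and `sample_token` helper below are hypothetical and not tied to any specific LLM or EDA tool; the point is only that free-running sampling can diverge between runs, while greedy or seed-pinned decoding stays deterministic:

```python
import random

def sample_token(probs, rng):
    """Sample a token index from a probability distribution (hypothetical helper)."""
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

probs = [0.5, 0.3, 0.2]  # toy next-token distribution

# Greedy decoding (temperature 0): always the argmax, fully reproducible.
greedy = max(range(len(probs)), key=lambda i: probs[i])

# Two independent sampling runs (different RNG state) can diverge.
rng_a, rng_b = random.Random(1), random.Random(2)
run_a = [sample_token(probs, rng_a) for _ in range(5)]
run_b = [sample_token(probs, rng_b) for _ in range(5)]

# Pinning the same seed restores bit-exact reproducibility.
rng1, rng2 = random.Random(42), random.Random(42)
assert [sample_token(probs, rng1) for _ in range(5)] == \
       [sample_token(probs, rng2) for _ in range(5)]
```

Industrial flows sidestep this variability by constraining generation—deterministic decoding, fixed seeds, or downstream formal checks—rather than trusting free-running sampling.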

Thirdly, verifiability and traceability are essential. Engineers can’t rely on a “black box” that just gives an answer; they need to understand how the AI made its decisions. For example, during placement and routing, an AI might analyze thousands of potential layouts. A verifiable system would log these different options and the trade-offs associated with them, allowing the chip designer to trace back and see why one layout achieved better power, performance, and area (PPA) than another, and to trust and validate the final design.

Examples of AI in EDA

A clear example of these industrial-grade principles in action is seen in Siemens’ Solido Design Environment software. It uses Adaptive and Additive AI technologies to validate designs and IPs through Monte Carlo simulations for mixed-signal designs and custom ICs. This provides orders-of-magnitude speedup for complex tasks, such as variation-aware analysis. These technologies use local machine learning models to predict the results of intensive SPICE simulations from a few initial full-fidelity runs. However, it doesn’t just guess blindly. It constantly checks its own predictions against a confidence threshold, providing SPICE-accurate results. If a prediction falls outside this safe margin, the system automatically reverts to running a full-fidelity SPICE simulation to ensure correctness. This clever hybrid approach perfectly demonstrates the industrial-grade principles:

  • It is accurate because it guarantees all results fall within the user-specified threshold.
  • It is verifiable because it self-checks every single prediction for accuracy.
  • It is robust because this trusted method can be reliably reused across different simulation conditions.
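
The predict-then-verify loop behind these principles can be sketched in a few lines. This is a toy illustration of the general adaptive-surrogate pattern, not Siemens’ actual implementation; the `full_spice_sim` stand-in, the linear model, and the trusted-region check are all hypothetical:

```python
def full_spice_sim(x):
    # Hypothetical stand-in for an expensive full-fidelity SPICE run.
    return 2.0 * x + 0.5

class AdaptiveSurrogate:
    """Cheap model fitted on a few full-fidelity runs; falls back to the
    full simulation whenever the input lies outside its trusted region."""
    def __init__(self, trusted_lo, trusted_hi):
        self.lo, self.hi = trusted_lo, trusted_hi
        # "Train" on two anchor points obtained from full-fidelity runs.
        y0, y1 = full_spice_sim(trusted_lo), full_spice_sim(trusted_hi)
        self.slope = (y1 - y0) / (trusted_hi - trusted_lo)
        self.intercept = y0 - self.slope * trusted_lo

    def evaluate(self, x):
        prediction = self.slope * x + self.intercept
        if self.lo <= x <= self.hi:
            return prediction, "surrogate"   # confident: cheap model suffices
        return full_spice_sim(x), "spice"    # outside margin: full fidelity

model = AdaptiveSurrogate(0.0, 1.0)
inside = model.evaluate(0.5)    # served by the surrogate
outside = model.evaluate(5.0)   # reverts to the full simulation
```

The design choice to tag every result with its provenance ("surrogate" vs. "spice") is what makes such a hybrid auditable: an engineer can always see which answers were predicted and which were fully simulated.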

Another example is the recently launched Aprisa AI solution. AI design explorer, a major technology in Aprisa AI, uses machine learning and reinforcement learning algorithms to assist at all major stages of digital implementation and to optimize workflows for the best PPA results. Aprisa AI explores different flows within a targeted design space at each stage, taking into account the type of design and the designer’s chosen metrics. It automatically decides which paths to carry forward to the next stages of exploration until it arrives at a complete flow, using compute resources more efficiently along the way. While the agent can be launched to make all the decisions autonomously and arrive at the best flow solution, Aprisa AI also gives designers verifiability and flexibility: the databases at each stage are saved so users can inspect and interact with the data and logs, and a dashboard presents all the exploration results, allowing the designer to view every metric and examine why one approach achieved better PPA than another. As in the previous example, the principles of verifiability, robustness, ease of use, and generalizability hold true.
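
Stage-by-stage exploration with automatic pruning can be pictured as a small beam search. The stage names, flow options, and cost model below are invented purely for illustration and do not reflect Aprisa AI’s internals:

```python
# Hypothetical stage options and a toy PPA cost model (lower is better).
STAGES = {
    "synthesis": ["area_first", "timing_first"],
    "placement": ["dense", "spread"],
    "routing":   ["short_wire", "low_congestion"],
}
COST = {
    "area_first": 3, "timing_first": 2,
    "dense": 2, "spread": 4,
    "short_wire": 1, "low_congestion": 2,
}

def explore(beam_width=2):
    """Score partial flows after each stage and carry only the most
    promising candidates forward to the next stage."""
    flows = [((), 0)]
    for options in STAGES.values():
        expanded = [(flow + (opt,), score + COST[opt])
                    for flow, score in flows for opt in options]
        expanded.sort(key=lambda fs: fs[1])
        flows = expanded[:beam_width]   # prune: keep only the best flows
    return flows[0]

best_flow, best_score = explore()
```

Each stage’s surviving candidates here would correspond to a saved database that the designer can inspect, mirroring the verifiability described above.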

Leading the AI transformation of chip design

This journey for EDA AI is about more than just adopting consumer-grade AI; it is about adopting solutions that are accurate, robust, verifiable, usable, and generalizable. At Siemens EDA, we are committed to driving this transformation in chip design by developing solutions that engineers, managers, and executives can rely on for their most critical semiconductor designs. We believe the future of chip design won’t be built on generic chatbots, but on trusted, explainable, industrial-grade AI partners fully integrated into every step of the semiconductor workflow. You can learn more about Siemens’ AI efforts here: EDA AI System | Siemens Software

About the author

Niranjan Sitapure, PhD, is the Central AI Product Manager at Siemens EDA. He oversees road mapping, development, strategic AI initiatives, and product marketing for the Siemens EDA AI portfolio. With a PhD in Engineering from Texas A&M University, Niranjan has honed his expertise in advanced AI/ML technologies, including time-series transformers, LLMs, and digital twins for engineering applications. He can be reached at Niranjan.sitapure@siemens.com or on LinkedIn.

Also Read:

Hierarchically defining bump and pin regions overcomes 3D IC complexity

CDC Verification for Safety-Critical Designs – What You Need to Know

A Compelling Differentiator in OEM Product Design


Mixel Company Update 2025
by Daniel Nenni on 11-25-2025 at 6:00 am

Mixel, Inc., a longtime leader in mixed-signal and MIPI® interface IP, entered a new chapter in its history following its acquisition by Silvaco Group, Inc., a global provider of design software and semiconductor IP. The acquisition, completed earlier in 2025, marks a strategic move that combines Silvaco’s deep expertise in EDA tools, TCAD software, and foundry enablement with Mixel’s broad portfolio of high-speed mixed-signal IP. Together, the companies are creating a powerful end-to-end offering for semiconductor design, from device modeling and simulation through to silicon-proven physical IP integration.

Founded in 1998 and headquartered in San Jose, California, Mixel built its reputation as one of the most trusted suppliers of mixed-signal IP, especially in the MIPI ecosystem. Its extensive portfolio includes MIPI D-PHY, C-PHY, and A-PHY interface IP, as well as other high-performance analog and mixed-signal components. These solutions are used across smartphones, automotive electronics, AI accelerators, and IoT systems. The company’s silicon-proven IP is deployed in hundreds of SoCs worldwide, with customers ranging from large semiconductor companies to innovative startups.

The integration into Silvaco enables Mixel’s technology to reach a broader customer base while expanding Silvaco’s growing semiconductor IP business. This acquisition aligns with Silvaco’s strategy to provide complete design enablement solutions that span from advanced simulation tools to ready-to-use IP blocks. By bringing Mixel into its portfolio, Silvaco strengthens its position in the connectivity and interface IP space — a critical area for next-generation chip design, especially as systems become more complex and bandwidth-hungry.

One of the most significant synergies comes from the complementary nature of the two companies’ offerings. Mixel’s expertise in silicon-proven, low-power PHY IPs perfectly complements Silvaco’s strength in modeling, circuit simulation, and process design kit (PDK) technology. Together, they can accelerate customers’ design cycles, reduce risk, and improve time-to-market for advanced SoCs and 3D ICs.

Since the acquisition, Mixel’s engineering and customer support teams have continued to operate from their San Jose base, ensuring continuity for existing customers and partners. The Mixel brand, known for reliability and interoperability across foundries and design ecosystems, remains intact under Silvaco’s ownership. Customers continue to benefit from Mixel’s long-standing partnerships with major foundries such as TSMC, Samsung, GlobalFoundries, and UMC, covering process nodes from 180nm to 3nm.

Mixel’s automotive and industrial design wins have also gained new strategic importance under Silvaco, as the combined company targets markets that demand long product lifecycles and high reliability. The MIPI A-PHY product line, in particular, positions Silvaco well to serve the rapidly growing automotive connectivity segment, which is essential for advanced driver-assistance systems (ADAS) and in-vehicle networking.

Looking ahead, Silvaco and Mixel are expected to focus on next-generation IP development supporting chiplet interconnects, AI-driven edge devices, and 3D system integration. By combining simulation, design, and IP under one roof, the new organization aims to simplify complex semiconductor development workflows and accelerate innovation across the industry.

Bottom line: With the acquisition, Mixel’s legacy of mixed-signal excellence continues—now strengthened by Silvaco’s global scale, EDA expertise, and vision for the future of semiconductor design. The union positions the combined company as a formidable player in the race to deliver faster, more efficient, and more connected silicon solutions.

Contact Mixel

About Mixel:

Mixel is a leading provider of mixed-signal IP, offering a wide portfolio of high-performance mixed-signal connectivity IP solutions. Mixel’s mixed-signal portfolio includes PHYs and SerDes, such as ASA Motion Link SerDes, MIPI® D-PHY™, MIPI M-PHY®, MIPI C-PHY™, LVDS, and many dual-mode PHYs supporting multiple standards. Mixel was founded in 1998 and is headquartered in San Jose, CA, with global operations to support a worldwide customer base. For more information, contact Mixel at info@mixel.com or visit www.mixel.com. You can also follow Mixel on LinkedIn, Twitter, Facebook, or YouTube.

Also Read:

Exploring the Latest Innovations in MIPI D-PHY and MIPI C-PHY

Mixel at the 2025 Design Automation Conference #62DAC

2025 Outlook with Justin Endo of Mixel


Cloud-Accelerated EDA Development
by Admin on 11-24-2025 at 10:00 am

By Nikhil Sharma, Sunghwan Son, Paul Mantey

The semiconductor industry faces an unprecedented crisis that threatens the very foundation of technological innovation. According to the latest Siemens EDA / Wilson Research Study, first-silicon success rates have plummeted to just 14%[1]—the lowest figure in more than twenty years of tracking this data. This isn’t merely a statistical anomaly; it represents a fundamental breakdown in our ability to deliver working silicon on schedule. Re-spin costs depend on the process node and the type of fix, ranging from $15M+ at 7nm to more than $100M at 3nm for a full re-spin.[2]

The crisis deepens as the industry pushes toward 2nm process nodes, where the complexity of testing and ensuring manufacturability increases exponentially. Advanced node designs demand unprecedented compute and memory resources for verification workflows, making traditional on-premises infrastructure increasingly inadequate.

As recent industry analysis has highlighted, addressing this crisis requires robust data infrastructure that enables mobility, security, and availability—the foundational pillars for next-generation verification approaches. The question isn’t whether to move to the cloud—it’s how to architect the complete ecosystem that makes AI-enhanced, 2nm-capable verification possible.

Beyond Data Infrastructure – The Complete Cloud Ecosystem

AWS and NetApp together deliver the complete ecosystem demanded by 2nm-era semiconductor development.  While NetApp’s FSx for NetApp ONTAP provides the high-performance, globally accessible storage foundation with FlexCache technology for seamless data mobility and FlexClone capabilities for instant environment provisioning, AWS contributes the elastic compute, advanced networking, and AI/ML services that transform how verification workflows operate at scale.

This partnership addresses the memory-intensive reality of advanced node verification.  As semiconductor devices increase in density and complexity, physical verification requires compute nodes with increasingly high memory-to-core ratios and larger numbers of high-performance cores. Traditional on-premises infrastructure cannot economically provide the burst capacity that advanced verification workflows demand.

The complete transformation extends far beyond “bigger HPC jobs” to encompass four integrated capabilities:

  1. Elastic Resource Scaling that eliminates capital expenditure constraints,
  2. Accelerated Modernization with access to the latest AMD and Intel-based instances optimized for EDA workloads,
  3. Global Collaboration through secure chambers built on AWS infrastructure with NetApp’s global data fabric, and
  4. Compressed Feedback Cycles via real-time analytics dashboards.

Modern EDA workflows require seamless integration across multiple tools, massive parallel-processing capabilities, and the ability to handle petabyte-scale datasets. Cloud environments provide the infrastructure elasticity to scale from hundreds to thousands of cores within minutes, enabling verification teams to meet aggressive project timelines without the months-long hardware procurement cycles that plague on-premises deployments.

The Analytics Foundation for AI Optimization

Cloud environments enable comprehensive data collection from verification runs, resource utilization patterns, coverage metrics, and performance benchmarks—creating rich datasets for analytics and optimization. This data foundation becomes the cornerstone for implementing AI-driven optimization that can fundamentally transform verification efficiency.

Real-Time Analytics Layer: Interactive dashboards provide immediate visibility into verification progress, bottleneck identification, and resource efficiency metrics. Teams can track coverage analysis, debug cycle times, and project completion status in real-time, enabling rapid course corrections with near immediate feedback loops.

Figure 1: Real-Time Verification Analytics Dashboard

AI-Driven Optimization Potential: With robust analytics foundations in place, AI systems can analyze verification patterns to optimize resource allocation, predict potential bottlenecks, and suggest configuration improvements. This creates opportunities for continuous improvement loops that learn from each verification cycle, identifying optimal tool configurations, predicting resource needs based on design complexity, and automatically adjusting compute allocation to minimize both time and cost.

Data-Driven Decision Making: The comprehensive analytics enable engineering teams to make informed decisions about resource allocation, tool selection, and verification strategies based on actual performance data rather than estimates.  For instance, determining the value of specific regression tests requires the ability to measure not only improvement in coverage or bugs identified, but also the cost of running that test.
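
As a deliberately simplified illustration of that trade-off, a team might rank regression tests by coverage gained per unit of compute cost. The test names and figures below are hypothetical:

```python
# Hypothetical per-test metrics: new coverage points hit vs. compute cost.
tests = [
    {"name": "smoke",      "coverage_gain": 120, "cost_usd": 4.0},
    {"name": "full_chip",  "coverage_gain": 300, "cost_usd": 60.0},
    {"name": "corner_mix", "coverage_gain": 90,  "cost_usd": 2.0},
]

def rank_by_value(tests):
    """Order regression tests by coverage gained per dollar spent."""
    return sorted(tests,
                  key=lambda t: t["coverage_gain"] / t["cost_usd"],
                  reverse=True)

ranking = [t["name"] for t in rank_by_value(tests)]
# The cheap corner_mix run delivers the most coverage per dollar, while
# the expensive full_chip regression ranks last despite covering the most.
```

Real deployments would feed such rankings from the analytics dashboards rather than hand-entered numbers, but the decision logic is the same: spend the next compute dollar where it buys the most verification signal.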

Figure 2: Verification Quality Metrics and Coverage Analysis
Figure 3: Resource Utilization and Cost Optimization

Advanced analytics can correlate design characteristics with verification resource requirements, enabling predictive capacity planning and automated scaling decisions.  This intelligence layer transforms reactive verification management into proactive optimization, directly addressing the root causes behind the industry’s 14% first-silicon success challenge.

Security and IP Protection – Enterprise-Grade Implementation

The semiconductor industry’s IP protection concerns are addressed through enterprise-grade security implementations that often exceed on-premises capabilities. AWS provides hardware root-of-trust, comprehensive compliance certifications, and secure collaboration chambers enabling distributed teams, IP partners, and foundries to work together while maintaining strict access controls. AWS is compliant with ISO/IEC 27001, ITAR, and AICPA SOC 2; please refer to AWS documentation for a full list of compliance programs.

NetApp’s FSx for ONTAP enhances security through FlexCache technology that enables global data access without compromising IP boundaries, and FlexClones that provide instant, isolated environment provisioning for different verification runs. These capabilities ensure that sensitive design data remains protected while enabling the collaboration essential for complex SoC development.

The security architecture implements zero-trust principles with granular access controls, encrypted data transmission, and comprehensive audit trails. Multi-tenant isolation ensures that even within shared cloud infrastructure, each project maintains complete data separation and access control.

Advanced threat detection and automated response capabilities provide continuous monitoring for potential security incidents. This comprehensive security framework often provides superior protection compared to traditional on-premises environments, where security updates and monitoring may lag behind current threat landscapes.

The Path Forward – Measurable Transformation

The semiconductor industry stands at an inflection point. The 14% first-silicon success rate represents a fundamental challenge that demands transformation. Companies that embrace the complete AWS and NetApp ecosystem gain access to elastic scaling that handles advanced verification complexity, analytics foundations that enable data-driven optimization, and security implementations that protect valuable IP.

Implementation Roadmap: Organizations can begin their transformation with pilot projects that demonstrate immediate value while building confidence in cloud-native approaches. The migration typically follows a phased approach: assessment and planning, pilot implementation, gradual workload migration, and full ecosystem optimization.

Figure 4: Cloud is a natural fit for EDA

ROI Considerations: Early adopters report significant improvements in verification cycle times, resource utilization efficiency, and team collaboration effectiveness. The transformation addresses core industry challenges: verification complexity at advanced nodes, resource allocation inefficiencies, collaboration barriers across global teams, and the need for faster feedback cycles.  By accelerating innovation rates and improving on-schedule metrics with AWS, these early adopters are earning more design wins and seeing significant growth in both revenue and profits.

Future Outlook: As verification requirements continue to grow with advanced process nodes, cloud-native EDA workflows provide the foundation for addressing the industry’s silicon success challenges. The integration of AI-driven optimization with comprehensive analytics creates a continuous improvement cycle that becomes more effective over time.

The future belongs to companies that recognize cloud transformation as the foundation for next-generation semiconductor development that can address the industry’s fundamental verification challenges. Success in the 2nm era and beyond requires not just better tools, but completely reimagined workflows leveraging the full potential of cloud-native architectures.

Disclaimer:

The views and opinions expressed on this blog are solely those of the author(s) and do not represent the views or positions of any employer, organization, or entity with which the author is or has been affiliated. This blog is a personal platform, and all content is shared in the author’s individual capacity.

Authors:

Nikhil Sharma is a Solutions Architecture Leader at Amazon Web Services (AWS), where he and his team of Solutions Architects help customers solve critical business challenges using AWS cloud technologies and services. With 25+ years of industry experience, Nikhil specializes in enterprise architecture and innovation. He is passionate about helping organizations leverage cloud technology to drive business outcomes.

Sunghwan Son is a Senior Solutions Architect at Amazon Web Services (AWS) who brings 17 years of distinguished experience in semiconductor design and cloud computing technologies. He specializes in optimizing Electronic Design Automation (EDA) infrastructure and developing innovative cloud solutions for enterprise customers.

Paul Mantey, FSxN Sales Specialist – High Tech, EDA & Semiconductors, NetApp, Inc. Leads a team focused on enabling builders and developers across the High-Tech, EDA and Semiconductor Development segments.  Prior to joining NetApp, Paul worked 13 years at Hewlett-Packard holding various roles in Design Engineering, Enterprise Architecture, Virtualization Program Management, and Product Development.  His thirteen patents represent extensive contributions to systems architecture, integration testing, and management hardware design.

References:

[1] https://resources.sw.siemens.com/en-US/white-paper-2024-wilson-research-group-ic-asic-functional-verification-trend-report/

[2] https://www.allpcb.com/allelectrohub/chip-design-and-tapeout-key-processes-explained#:~:text=Tapeout%20costs%20and%20wafer%20pricing,-Mask%20costs%20and&text=Typical%20tapeout%20cost%20estimates%20by,nm%20may%20exceed%20$100%20million.

Also Read:

Semiconductors Up Over 20% in 2025

WEBINAR: Is Agentic AI the Future of EDA?

Live Webinar: Considerations When Architecting Your Next SoC: NoC with Arteris and Aion Silicon


A Tour of Advanced Data Conversion with Alphacore
by Mike Gianfagna on 11-24-2025 at 6:00 am

There is always a lot of buzz about advanced AI workloads at trade shows: how to train them and how to run them. Advanced chip and multi-die designs are how AI is brought to life, so it was a perfect fit for discussion at a show. But there is another side of this discussion. Much of the work going on in AI workloads has to do with processing data acquired from the real world around us. There are massive sensor networks everywhere acquiring all kinds of information. The critical link for all this is data conversion—converting the analog signals from sensor networks into digital information that can be processed by AI workloads. Getting this part right is critical, and it’s not easy to do.

That’s why a recent presentation from Alphacore caught my eye. The company was showing the video of a real test run of its latest analog-to-digital converter IP in the lab, and the results were quite impressive. Here’s what I found in my tour of advanced data conversion with Alphacore.

About Alphacore

Alphacore is a company that focuses on analog, mixed signal and RF solutions. Some of its primary businesses include:

  • High performance and low power analog, mixed signal, and RF electronics, including advanced analog-to-digital converters
  • High-speed visible light and infrared readout ICs (ROICs) and full camera systems
  • Robust power management ICs (PMICs) for space and high-energy physics experiments
  • Innovative devices ensuring supply chain and IoT cybersecurity

The company also provides hardened versions of many of its products for harsh environments, including radiation and extreme temperatures. You can learn more about this unique company on SemiWiki here.

The Data Converter Demo

Alphacore presented the lab video of its A12B9G-GF22 at a recent event. The part is a low-power, high-speed analog to digital converter (ADC) intellectual property design block fabricated in the GlobalFoundries 22nm FDSOI process. A critical challenge for any ADC is delivering high accuracy results at a high data rate while consuming as little power as possible. The Alphacore IP block excels in all these areas, and the video demonstration was proof of that.

The demo configuration is shown in the photo at the top of this post. Below is a version that illustrates what equipment was used.

Demo setup

Alphacore showcased some very impressive results with this demo. I can tell you from first-hand experience that capturing real-time results of a precision analog demonstration like this can be quite challenging. The smallest issue can ruin the whole flow. The robust behavior and excellent results shown say a lot about the quality of this ADC.

The video showcased 9 giga-samples per second at 12-bit resolution using under 100 milliwatts of power. Those are impressive results. Regarding accuracy and stability, there is a parameter called spurious-free dynamic range, or SFDR. It is the ratio between the level of a desired input signal and the level of the largest distortion component in the signal’s spectrum, typically expressed in decibels (dB). This is a key figure of merit for ADCs, showing the circuit’s ability to distinguish the signal from noise and distortion.
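
For readers who want the arithmetic, SFDR can be computed directly from a magnitude spectrum. The short self-contained sketch below builds a synthetic test tone with a spur 60 dB down and recovers that figure; the signal length and bin choices are illustrative only, not drawn from Alphacore’s measurements:

```python
import cmath
import math

def sfdr_db(signal):
    """SFDR: fundamental level over the largest spur, in dB."""
    n = len(signal)
    # Magnitude spectrum via a direct DFT (fine for a short demo signal);
    # skip DC (k = 0) and keep only the positive-frequency bins.
    mags = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(1, n // 2)]
    mags.sort(reverse=True)
    fundamental, largest_spur = mags[0], mags[1]
    return 20 * math.log10(fundamental / largest_spur)

# Full-scale tone at bin 4 plus a -60 dB spur at bin 11.
n = 64
sig = [math.sin(2 * math.pi * 4 * t / n) +
       0.001 * math.sin(2 * math.pi * 11 * t / n) for t in range(n)]
# sfdr_db(sig) evaluates to roughly 60 dB.
```

Lab measurements like Alphacore’s use windowed FFTs over much longer captures, but the figure of merit reported in the demo plots is this same fundamental-to-largest-spur ratio.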

To cover this part, a series of measurements were run with varying input frequencies. The resulting FFT plots reveal some excellent results as shown below.

Demo results

The team explained that Alphacore’s ADC technology can be used at every design stage, from transistor-level component IP blocks to complete ASIC and SoC development as illustrated in the figure below.

To Learn More

If you have a need to connect measured data to digital algorithms, you should definitely learn more about what Alphacore has to offer. To begin, you can access a Product Brief for the A12B9G-GF22 here. And that summarizes a tour of advanced data conversion with Alphacore.

Also Read:

Alphacore at the 2024 Design Automation Conference

Analog to Digital Converter Circuits for Communications, AI and Automotive

High-speed, low-power, Hybrid ADC at IP-SoC


Video EP12: How Mach42 is Changing Analog Verification with Antun Domic
by Daniel Nenni on 11-21-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by Antun Domic, who discusses Mach42’s work on AI and analog verification. Antun covers many aspects of analog/AMS verification and how Mach42’s unique AI-fueled approach provides significant benefits. He explains the balance of speed vs. accuracy and how Mach42’s advanced AI processing creates highly efficient models.

Antun also discusses how these models can be integrated into current design flows and describes the benefits of doing so. He also explains details of several real-world examples of the technology.

Contact Mach42

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization,
committee or any other group or individual.


Silicon Creations Company Update 2025
by Daniel Nenni on 11-21-2025 at 6:00 am

Silicon Creations continues to strengthen its position as one of the most reliable and widely used analog and mixed-signal IP providers in the semiconductor industry. Founded in 2006, the company focuses on high-performance and low-risk IP solutions including PLLs, oscillators, SerDes interfaces, and high-speed differential I/Os. The company’s technology spans a wide range of process nodes—from advanced 2 nm designs to mature nodes used in automotive and industrial applications—making it a trusted partner for SoC designers around the world.

Over the past year, Silicon Creations has achieved several milestones that underscore both its growth and its reputation for excellence. The company surpassed ten million wafers in production containing its IP, reflecting broad adoption across multiple foundries and customers. It also celebrated its thousandth production license for its fractional-N PLL family, a key building block in modern SoC clocking architectures. In addition, Silicon Creations completed its thousandth FinFET tape-out and expanded its portfolio to include fully qualified IP on advanced process nodes such as TSMC’s N2P technology. These achievements demonstrate the company’s ability to maintain pace with the leading edge of semiconductor manufacturing.

Industry recognition continues to follow. Silicon Creations has received multiple “Partner of the Year” awards from leading foundries, highlighting the strength of its engineering quality, customer support, and first-silicon success rate. The company’s IP is used in consumer electronics, data-center processors, networking devices, and automotive systems, giving it a balanced and diversified market exposure. Recently, it has also expanded into high-growth segments such as AI accelerators and chiplet-based architectures. Its clocking and high-speed interconnect IPs are becoming critical enablers for multi-die systems, where precision timing and low jitter are essential.

There are several focus areas for the company in 2025. The first is the enablement of next-generation chiplets and die-to-die connectivity with optimized high-speed, low-jitter clocking solutions. As a founding member of the TSMC 3DFabric® Alliance, Silicon Creations is developing specialized clocking IP that supports standards such as UCIe and HBM4, aiming to simplify integration across heterogeneous systems. The second is continued innovation in high-speed interface IP, featuring its multi-protocol SerDes in most FinFET nodes, with PCIe (Gen 1-5), (embedded) DisplayPort, and 10G/25G Ethernet solutions being the most commonly deployed by its customers. Lastly, the company will continue to invest in automotive-grade IP that meets the stringent safety and reliability standards required by ISO 26262. These efforts position Silicon Creations well for future growth as electronics in vehicles become more complex and compute-intensive.

To support its global customer base, Silicon Creations is expanding partnerships and regional presence. A recent collaboration with distribution partners in India is designed to reach emerging semiconductor design clusters. Combined with its established offices in the United States and Poland, the company’s footprint enables close technical collaboration with customers worldwide.

Bottom line: Silicon Creations is well positioned for continued growth. Its strong ecosystem partnerships, advanced-node readiness, and expanding role in emerging architectures such as chiplets and AI processors make it a key player in the semiconductor IP landscape. As SoC and system designers seek proven solutions that reduce risk and accelerate time to market, Silicon Creations stands out as a trusted and technically sophisticated partner poised to thrive through the next wave of semiconductor innovation.

Contact Silicon Creations

About Silicon Creations

Silicon Creations provides world-class silicon intellectual property (IP) for precision and general-purpose timing PLLs, SerDes and high-speed differential I/Os. Silicon Creations’ IP is in mass production from 3 to 180 nanometer process technologies, with 2nm GDS available for deployment. With a complete commitment to customer success, its IP has an excellent record of first silicon to mass production in customer designs. Silicon Creations, founded in 2006, is self-funded and growing. The company has development centers in Atlanta, USA, and Krakow, Poland, and worldwide sales representation. For more information, visit www.siliconcr.com

Also Read:

Silicon Creations at the 2025 Design Automation Conference #62DAC


WEBINAR: Is Agentic AI the Future of EDA?

by Daniel Nenni on 11-20-2025 at 6:00 am


The semiconductor industry is entering a transformative era, and few trends are generating more discussion or confusion than Agentic AI. From autonomous design exploration to next-generation verification strategies, Agentic AI promises dramatic changes in how chips are conceived, validated, and delivered. But as with any major technology shift, key questions remain: What is real today? What still belongs in the “future potential” category? And what infrastructure foundations are needed to make Agentic AI practical, scalable, and secure inside modern design environments?

Register Now

SemiWiki invites you to join us on Thursday, December 4, 2025 at 10:00 AM PST for a thought-leadership webinar that tackles these questions head-on: “Is Agentic AI the Future for EDA — and What Does It Mean for EDA Infrastructure?” This 60-minute session brings together top experts from Cadence, NetApp, and AMD to explore how Agentic AI is reshaping the EDA landscape and what engineering teams need to prepare for next.

We start with a brief introduction from SemiWiki founder Daniel Nenni, followed by a feature keynote from Mahesh Turaga, VP of Cadence Cloud. Mahesh will dive into the current state of Agentic AI in EDA, separating industry insights from inflated expectations. He’ll also share how Cadence is integrating AI-driven capabilities into its product stack, and what adoption challenges design teams should anticipate as they scale AI across real-world workflows.

The latter half of the event features a dynamic panel discussion with leaders who sit at the crossroads of EDA tools, infrastructure, and advanced chip methodologies:

  • Rob Knoth, Sr Group Director of Strategy & New Ventures, Cadence

  • Janhavi Giri, NetApp EDA Industry Vertical Lead (formerly Intel)

  • Khaled Heloue, Ph.D., Fellow at AMD specializing in CAD, methodology, and AI

  • Moderator: Daniel Nenni, SemiWiki

Each panelist brings a unique perspective—from tool strategy and data architecture to design enablement and compute optimization. Together, they will unpack how Agentic AI may reshape engineering roles, design workflows, compute demands, storage architectures, and the relationship between EDA vendors and internal methodology teams. Expect a grounded, technical discussion aimed at practitioners—not marketing gloss.

Register Now

This webinar is ideal for semiconductor design engineers, EDA and CAD methodology engineers, HPC/EDA infrastructure architects, and IT strategists supporting compute-intensive design environments. Whether you’re exploring early AI integration or actively deploying AI-driven automation, you’ll gain valuable clarity on where the industry is heading.

Key takeaways include:
  • How Agentic AI is redefining next-generation design flows and tool capabilities

  • What infrastructure changes—compute, storage, orchestration, data management—are necessary to support AI-driven EDA

  • Adoption challenges and real-world insights from top EDA and infrastructure leaders

  • Practical guidance for preparing your organization for the shift toward autonomous, AI-augmented design

Agentic AI is a catalyst for the next major evolution in design automation. Join us on December 4th to understand what that means for your tools, your infrastructure, and your engineering roadmap.

Register today and be part of the conversation shaping the future of EDA.

Also Read:

WEBINAR: Revolutionizing Electrical Verification in IC Design

WEBINAR: How PCIe Multistream Architecture is Enabling AI Connectivity


Semiconductors Up Over 20% in 2025

by Bill Jewell on 11-19-2025 at 2:00 pm


The world semiconductor market was $208 billion in third-quarter 2025, according to WSTS. This marks the first time the market has been above $200 billion. 3Q 2025 was up 15.8% from 2Q 2025, the highest quarter-to-quarter growth since 19.9% in 2Q 2009. 3Q 2025 was up 25.1% from 3Q 2024, the highest growth versus a year earlier since 28.3% in 4Q 2021.
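For readers who want to sanity-check the percentages, the quarter-over-quarter and year-over-year figures follow from simple ratio arithmetic. A minimal sketch (the prior-quarter and prior-year market sizes below are back-computed from the article's growth rates, not independently sourced):

```python
# Percent change between two periods, using the WSTS figures quoted above.
def growth(current, previous):
    """Percent change from previous period to current period."""
    return (current / previous - 1) * 100

q3_2025 = 208.0            # $B, from the article
q2_2025 = q3_2025 / 1.158  # implied by the 15.8% QoQ growth
q3_2024 = q3_2025 / 1.251  # implied by the 25.1% YoY growth

print(f"QoQ growth: {growth(q3_2025, q2_2025):.1f}%")  # ~15.8
print(f"YoY growth: {growth(q3_2025, q3_2024):.1f}%")  # ~25.1
```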

The table below shows the top twenty semiconductor companies by revenue. The list includes companies which sell devices on the open market. This excludes foundry companies such as TSMC and companies which only produce semiconductors for their internal use such as Apple. The revenue in most cases is for the total company, which may include some non-semiconductor revenue. In cases where revenue is broken out separately, semiconductor revenue is used.

Nvidia remained the dominant number one supplier, with $57.0 billion in revenue. Korean memory companies Samsung and SK Hynix were two and three at $23.9 billion and $17.6 billion, respectively. Memory companies reported robust 3Q 2025 growth from 2Q 2025 with Kioxia up 31%, Micron Technology up 22%, Sandisk up 21%, Samsung up 19%, and SK Hynix up 10%. The strongest quarter-to-quarter growth rates among the non-memory companies were Sony Imaging at 51%, Nvidia at 22%, AMD at 20%, Broadcom at 16% and STMicroelectronics at 15%. MediaTek was the only company to report a revenue decline in 3Q 2025 of -5.5%.

Semiconductor company guidance for 4Q 2025 revenue change is mixed. Of the fourteen companies providing guidance, nine expect increasing revenue ranging from 14% at Nvidia to 1.4% at Renesas Electronics. Five companies guided revenue declines, ranging from -1.3% at Onsemi to -9.2% at Sony Imaging.

AI continues to drive semiconductor market growth with all the memory companies citing AI memory for data centers as the strongest growth area. Nvidia and AMD also attributed most of their growth to AI. Qualcomm and MediaTek are seeing growth in mobile handsets. The automotive segment is seen as generally flat, with some companies adjusting inventories.

Through the first three quarters of 2025, the semiconductor market is up 21.2% from a year ago, according to WSTS data. The market is much stronger than anticipated earlier in the year. The AI market has been booming in 2025, with Nvidia revenues for the first three quarters of 2025 up 62% from a year earlier. The major memory companies have cited AI as their major growth driver and are up 21% over the same time period.

Earlier in the year, many industry analysts (including us at Semiconductor Intelligence) were concerned about the effect Trump administration tariffs would have on the semiconductor market. However, the final tariffs were not as severe as expected and largely exempted semiconductors and electronic products.

Recent forecasts for the 2025 semiconductor market growth range from 14% from Yole Group to 22% from us at Semiconductor Intelligence.

We at Semiconductor Intelligence have not finalized our 2026 semiconductor forecast. Current economic uncertainty will carry over into 2026. The semiconductor market has been overdependent on AI for growth in 2025 and this sector could moderate in 2026. Other sectors which have been weak in 2025 – such as PCs, smartphones and automotive – could see stronger growth in 2026. Our preliminary projection for 2026 is growth in the 12% to 18% range.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

U.S. Electronics Production Growing

Semiconductor Equipment Spending Healthy

Semiconductors Still Strong in 2025


FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

by Daniel Nenni on 11-19-2025 at 10:00 am


Modern chip design verification often encounters challenges when connecting peripherals, primarily due to drastic differences in operating speed or hardware limitations. Designs running on hardware emulators or FPGA prototyping platforms typically operate at clock frequencies of tens of megahertz, and in some cases even below one megahertz. In contrast, real-world peripherals and protocols, such as PCIe and high-speed Ethernet, operate at hundreds of megahertz or higher. This significant gap in operating speed makes direct connections between the prototype and peripherals almost impossible.

To address speed mismatches, a common and effective solution is the use of a speed adaptor. A speed adaptor is a specialized hardware interface used in prototyping or emulation environments. Its primary function is to bridge systems that operate at very different speeds. This enables verification leveraging real-world transactions rather than pure models. In situations where the hardware does not support a particular peripheral or interface, functional and protocol behavior can be emulated using models and interface IP.
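Conceptually, a speed adaptor decouples a fast peripheral clock domain from a slow prototype clock domain with buffering and backpressure. The sketch below illustrates only that general principle, not any vendor's implementation; the class and method names are hypothetical:

```python
import collections

# Illustrative model of a speed adaptor: a bounded FIFO that absorbs fast
# peripheral bursts and lets the slow prototype domain drain at its own rate,
# asserting backpressure when the buffer fills.
class SpeedAdaptor:
    def __init__(self, depth):
        self.fifo = collections.deque()
        self.depth = depth

    def fast_side_push(self, word):
        """Peripheral domain: accept a word only if buffer space remains."""
        if len(self.fifo) < self.depth:
            self.fifo.append(word)
            return True
        return False  # backpressure: the fast side must stall or retry

    def slow_side_pop(self):
        """Prototype domain: drain one word per (much slower) cycle."""
        return self.fifo.popleft() if self.fifo else None

adaptor = SpeedAdaptor(depth=4)
accepted = [adaptor.fast_side_push(w) for w in range(6)]  # burst of 6 words
print(accepted)                  # first 4 accepted, last 2 backpressured
print(adaptor.slow_side_pop())   # slow side drains in order, starting at 0
```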

Three typical application cases illustrate the practical use of speed adaptors and memory models in FPGA prototyping:

Case 1: PCIe Speed Adaptor

Speed adaptors address several key challenges, including speed adaptation, protocol conversion, time decoupling, and providing controllability and observability for debugging purposes. In FPGA prototyping, the working frequency of AMD (Xilinx) PCIe PHYs ranges from 62.5 MHz for Gen1 to 500 MHz for Gen4, which is far higher than the operating frequency of synthesized user designs. When a user design is partitioned across multiple FPGA boards, the effective operating frequency can drop below 20 MHz. This creates a substantial mismatch with the PCIe PHY frequency, making reliable speed adaptation critical.

The core solution for PCIe speed adaptation is the PCIe Switch IP. Its multi-port architecture allows independent link establishment and operation in different states. This enables dynamic adaptation of protocol versions, link width, and speed. The solution also integrates essential IP blocks for PCS and PIPE interface conversion, forming a complete approach for speed adaptation in PCIe systems. With this architecture, FPGA prototypes can interface with high-speed PCIe devices while maintaining functional correctness and reliable communication.

Case 2: HDMI Speed Adaptor

In this approach, HDMI audio and video streams are transmitted directly to a host system. A custom decoder extracts the video and audio data, which are then displayed using a software-based simulation of a monitor. Similar architectures are applied to DisplayPort, MIPI DSI, and USB speed adaptors. This method allows verification of high-speed display and multimedia interfaces even when the FPGA prototype cannot operate at the full peripheral speed. It ensures that video and audio pipelines can be tested and analyzed under conditions that reflect actual system behavior.

Case 3: Memory Model

FPGA prototyping systems are often limited in the types of memory they can support directly. To validate DDR5, LPDDR5, and HBM2E/3 memory controllers, memory model IP is used to emulate the behavior of these memories using DDR4 hardware available on the FPGA. For system debugging, S2C‘s memory model includes a backdoor that provides controllable and observable access to memory reads and writes. This capability allows efficient testing and validation of memory interfaces. It also supports early detection of design issues and verification of system-level functionality.
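The front-door/backdoor split described above can be sketched as follows. This is a hypothetical behavioral model for illustration only; the class and method names are not S2C's actual API:

```python
# Behavioral memory model: normal (front-door) accesses mimic bus traffic and
# are logged for observability, while backdoor accesses load or inspect memory
# directly, without generating transactions.
class MemoryModel:
    def __init__(self):
        self.mem = {}   # sparse storage: address -> data word
        self.log = []   # observability: record of front-door traffic

    # Front door: what the DUT's memory controller exercises
    def write(self, addr, data):
        self.log.append(("WR", addr, data))
        self.mem[addr] = data

    def read(self, addr):
        data = self.mem.get(addr, 0)
        self.log.append(("RD", addr, data))
        return data

    # Backdoor: direct test setup and inspection, no bus transactions
    def backdoor_load(self, addr, data):
        self.mem[addr] = data

    def backdoor_peek(self, addr):
        return self.mem.get(addr, 0)

m = MemoryModel()
m.backdoor_load(0x1000, 0xDEADBEEF)  # preload without bus traffic
print(hex(m.read(0x1000)))           # front-door read sees preloaded data
print(m.log)                         # only the front-door read was logged
```

Backdoor preloading lets large test images be placed in memory instantly, while the logged front door supports the controllable, observable debugging the article describes.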


S2C has built a wide range of speed adaptors, memory models, and over 90 ready-to-use daughter cards to address complex peripheral connectivity challenges. These solutions enable customers to overcome the difficulties associated with connecting high-speed peripherals and unsupported memory to achieve fast deployment.

With more than twenty years of experience in FPGA prototyping, S2C continues to invest in developing and expanding support for additional protocols and interface standards. The company focuses on applying advanced digital EDA technologies in practical prototyping scenarios, helping customers reduce verification cycles and accelerate time-to-market. By providing reliable speed adaptation and memory modeling solutions, S2C brings FPGA prototyping closer to real-world system conditions, allowing engineers to validate designs effectively and efficiently.

Contact S2C Here

Also Read:

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China

Double SoC prototyping performance with S2C’s VP1902-based S8-100

Enabling RISC-V & AI Innovations with Andes AX45MPV Running Live on S2C Prodigy S8-100 Prototyping System


An Insight into Building Quantum Computers

by Bernard Murphy on 11-19-2025 at 6:00 am

Quantum processor courtesy IBM

Given my physics background I'm ashamed to admit I know very little about quantum computers (QC), though I'm now working to correct that defect. Like many of you I wanted to start with the basics: what are the components and systems in the physical implementation of a quantum "CPU," and how do they map to classical CPUs? I'm finding the answer is: not very much. General intent, planar fabrication, even 3D stacking are common; otherwise we must rethink everything else. I'm grateful to Mohamed Hassan (Segment Manager for Quantum EDA, Keysight) for his insights shared in multiple Bootcamp videos. I should add that what follows (and the Keysight material) is based on superconducting QC technologies, one of the more popular of the multiple competing QC technologies in deployment and research today.

5 qubit computer – courtesy IBM

What, no gates?

The image above shows a 5-qubit processor. Tiny, but it immediately untethers me from all my classical logic preconceptions. Each square is a qubit, and the wiggly lines are microwave guides, between qubits and to external connectors. That's it. No gates, at least no physical gates.

In a rough way qubits are like classical memory bits. It's tempting to think of the microwave guides as connectors, but that's not very accurate. Guides connecting to external ports can read out qubit values, but they also control qubits, the first hint of why you don't need physical gates. A single-input single-output "gate" is implemented by pulsing a qubit with just the right microwave frequency for just the right amount of time to modify that qubit. For example, a Hadamard "gate" will change a pure state, |0> or |1>, into a superposition (|0>+|1>)/√2.
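The Hadamard example above is easy to check in matrix form. A minimal sketch using NumPy (standard textbook matrices, independent of any particular QC hardware):

```python
import numpy as np

# The Hadamard gate as a 2x2 matrix, applied to the pure state |0>.
# The result is the equal superposition (|0> + |1>)/sqrt(2).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])    # |0> as a column vector in the {|0>, |1>} basis
superposition = H @ ket0
print(superposition)           # [0.70710678 0.70710678] = (|0>+|1>)/sqrt(2)
```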

That’s the next shock for a classical logic designer. States in a quantum processor don’t progress through logic. They are manipulated and changed in-place. The reason is apparently self-evident to QC experts, unworthy of explanation as evidenced by the fact that I am unable to find any discussion on this point. My guess is that qubits are so fragile that moving them from one place to another on the chip would instantly destroy coherence (collapsing complex states to pure states) and defeat the purpose of the system.

What about the microwave guides between qubits? This is a bit trickier and at the heart of quantum processing. 2-input (or more) gates are implemented through a microwave pulse to a controlling qubit, which in turn can pulse a target qubit (through one of those qubit-to-qubit connectors) to change its state. This is how controlled-NOT (CNOT) gates work to create entangled states.
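The entangling sequence just described can be sketched with the same matrix machinery: a Hadamard on the control qubit followed by a CNOT turns |00> into the Bell state (|00> + |11>)/√2, the canonical entangled pair:

```python
import numpy as np

# Two-qubit state vectors are ordered |00>, |01>, |10>, |11>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],   # CNOT flips the target (second) qubit
                 [0, 1, 0, 0],   # when the control (first) qubit is |1>
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket0 = np.array([1.0, 0.0])
psi = np.kron(ket0, ket0)      # start in |00>
psi = np.kron(H, I2) @ psi     # Hadamard on the control qubit
psi = CNOT @ psi               # controlled NOT entangles the pair
print(psi)                     # [0.707 0 0 0.707] = (|00>+|11>)/sqrt(2)
```

Neither qubit of the result can be described independently of the other, which is exactly what entanglement means.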

In short there are no physical gates. Instead, operations (quite different from familiar Boolean operations) are performed by sequences of microwave pulses, modifying qubits in-place. Makes you wonder how the compiler stack works, but that is not something I am qualified to discuss yet.

Keysight Quantum EDA support

The core technologies underlying superconductor-based QCs are Josephson Junctions (JJs), which are the active elements in qubits, and the microwave guides between qubits and to external ports. There is a huge amount of technology here that I won’t even attempt in this short blog, but I will briefly mention the general idea behind the superconducting qubit (for which there are also multiple types). Simple examples I have seen pair a JJ with a capacitor to form a quantum harmonic oscillator. Any such oscillator has multiple quantized resonance energies (quantum theory 101). If suitably adapted this can be reduced to two principal energies, a ground state and an excited state, representing |0> and |1> (or a mix/superposition).

Keysight offers a range of EDA tools in support of building superconducting quantum systems. Their ADS layout platform is already well established for building RFICs and, importantly in this context, MMICs (monolithic microwave integrated circuits). The same technology with a quantum-specific component library is ideal for building QC layouts. It is also important for building other components of the QC design outside the core processor. A quantum amplifier is needed to boost the tiny signal coming out of the processor; these amplifiers are also built using JJs. It is also important to add attenuators on connections from the outside, non-supercooled world to the processor, to minimize photon noise leaking through to the compute element.

Microwave design optimization (using Keysight EM for Quantum CAD) is essential to design not only the waveguides but also the resonance frequencies within and between qubits. Getting these resonance frequencies right is at the heart of controlling qubits and of minimizing extraneous noise.

Quantum Circuit Analysis is critical in designing the quantum amplifier and in modeling higher qubit counts. Quantum System Analysis is key to optimizing for low overall system noise and to checking pulsed system response at the system level, since qubit gating control is entirely dependent on pulse frequencies and durations.

A quick extra emphasis on noise management. Noise in QCs is far more damaging than it is in classical circuits. Correct QC behavior is completely dependent on maintaining quantum coherence through the operation of a QC algorithm. Noise kills both entanglement and superposition. Production QCs are judged not just by how many qubits they support but also by how long they can maintain coherence – longer times allow for more complex algorithms.
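The link between coherence time and algorithm complexity can be made concrete with a back-of-envelope model: if coherence decays roughly as exp(-t/T2), the number of sequential gate pulses that fit before the state degrades is bounded by T2 divided by the gate time. The numbers below are purely illustrative, not measurements from any real machine:

```python
import math

T2 = 100e-6        # assumed coherence time: 100 microseconds (illustrative)
gate_time = 50e-9  # assumed microwave-pulse "gate" duration: 50 ns (illustrative)

def coherence_remaining(n_gates):
    """Fraction of coherence surviving n sequential gates, exp(-t/T2) model."""
    return math.exp(-(n_gates * gate_time) / T2)

for n in (10, 100, 1000):
    print(f"{n:5d} gates -> {coherence_remaining(n):.3f} of coherence remains")
```

Under these assumed numbers, 1000 gates already costs roughly 40% of the coherence, which is why longer coherence times translate directly into deeper, more complex algorithms.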

You can learn more about Keysight Quantum EDA HERE. I sat through the full Bootcamp!

Also Read:

Podcast EP317: A Broad Overview of Design Data Management with Keysight’s Pedro Pires

Video EP11: Meeting the Challenges of Superconducting Quantum System Design with Mohamed Hassan

WEBINAR: Design and Stability Analysis of GaN Power Amplifiers using Advanced Simulation Tools