

3D ESD verification: Tackling new challenges in advanced IC design
by Admin on 12-17-2025 at 10:00 am


By Dina Medhat

Three key takeaways

  • 3D ICs require fundamentally new ESD verification strategies. Traditional 2D approaches cannot address the complexity and unique connections in stacked-die architectures.
  • Classifying external and internal IOs is essential for robust and cost-efficient ESD protection. Proper differentiation enables optimized protection schemes, area savings, and reliable performance.
  • Industry-proven automation tools, like Calibre 3DPERC, are essential to meet evolving ESD verification needs in heterogeneous 3D designs.

Why is ESD verification critical for 3D IC designs?

Electrostatic discharge (ESD) remains one of the most persistent threats to integrated circuits at every step of the lifecycle—from manufacturing through operation. ESD events release a sudden surge of electrical current, which can melt metal, break down junctions, or destroy oxides, leading to costly failures. Effective ESD protection is therefore not just good practice—it is essential for reliability and product lifetime.

How do ESD protection circuits prevent damage?

Successful ESD protection hinges on choosing robust circuit architectures and ensuring that physical implementation matches the design intent. IC designers introduce specific ESD protection schemes at both the schematic and layout stages. Before manufacturing, ESD protection rules are verified to confirm enough safeguards are in place—addressing topology requirements and confirming that interconnects can handle ESD events. Verification at this stage is fundamental to design reliability.

What’s different about ESD protection in 3D ICs?

3D integration is revolutionizing IC design. In 2.5D architectures, dies sit side-by-side atop a silicon interposer. Micro-bumps (or hybrid bumps) connect each die to the interposer, and flip-chip bumps connect the interposer to the ball grid array (BGA) substrate. Full 3D integration stacks dies on top of each other, linked by through-silicon vias (TSVs). Designers often mix technologies and process nodes across dies, leveraging different vendor solutions for interposers, packaging, and fabrication. Each integration method introduces unique benefits along with specific ESD challenges. Figure 1 illustrates the main differences between 2.5D and 3D integration.

Figure 1: 2.5D versus 3D designs.

Why 3D ESD verification is more complex than 2D and 2.5D

Traditional 2D ESD verification considers all chip pads as interfaces to the outside world, and these demand robust protection against ESD events. In 3D designs, however, many pads serve only as internal die-to-die connections, not external IO interfaces (figure 2). This distinction is crucial:

  • External IOs face ESD events from package pins and require comprehensive protection.
  • Internal IOs are far less exposed and can use smaller, more efficient ESD devices—saving area and cost.

Figure 2: External IOs versus internal IOs.

External IOs are connected to the package pins and face more ESD events than internal IOs. Similar to 2D designs, external IOs are affected by both human body model (HBM) and charged device model (CDM) ESD events. However, internal IOs are affected by far fewer HBM and CDM events. This difference means internal IOs can use smaller ESD protection circuits, which in turn translates into significant savings in die area and cost without sacrificing overall ESD protection robustness.

Furthermore, in 3D ICs, protection needs to be evaluated at the system level, not on a die-by-die basis. This opens the door to exploring the minimum ESD protection needed to avoid failure of the final 3D product. ESD devices can span multiple dies or reside on a die with a different process node than the signal they protect. Complex connection topologies mean that power clamps, resistors and other ESD components may be shared, increasing verification complexity. There are many ways to tailor an ESD methodology to reduce the manufacturing cost of the 3D design, but each comes at the price of added verification complexity. Add the challenge of integrating multiple process nodes and vendors, and verification must become architecture-driven, not just foundry-specific.
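
To make the external/internal distinction concrete, here is a deliberately simplified sketch of the kind of check an ESD rule deck encodes: classify each pad by whether it reaches a package pin, then apply a tighter point-to-point (P2P) resistance limit to external IOs than to internal die-to-die connections. The pad names, limits, and data structures below are hypothetical; production flows rely on foundry rule decks and tools such as Calibre PERC/3DPERC rather than ad-hoc scripts.

```python
# Illustrative sketch only: a toy pad classifier and point-to-point (P2P)
# resistance check. All pad names, limits, and data structures are
# hypothetical assumptions, not a real rule deck.

# Hypothetical pad table: name -> (connects_to_package_pin, p2p_resistance_ohms)
pads = {
    "VDDIO_BUMP_0": (True, 0.8),   # external IO, exposed to HBM/CDM at package pins
    "D2D_TX_017":   (False, 2.5),  # internal die-to-die micro-bump
    "D2D_RX_042":   (False, 1.9),
    "GPIO_3":       (True, 1.6),
}

# Assumed (illustrative) limits: external IOs need a low-resistance discharge
# path to their clamp; internal IOs can tolerate a relaxed limit because they
# see far fewer HBM/CDM events and can use smaller protection devices.
P2P_LIMIT_OHMS = {"external": 1.0, "internal": 3.0}

def classify(connects_to_package_pin: bool) -> str:
    """External IOs reach package pins; everything else is die-to-die."""
    return "external" if connects_to_package_pin else "internal"

violations = []
for name, (is_pkg_pin, r_p2p) in pads.items():
    io_class = classify(is_pkg_pin)
    if r_p2p > P2P_LIMIT_OHMS[io_class]:
        violations.append((name, io_class, r_p2p))

for name, io_class, r_p2p in violations:
    print(f"P2P violation: {name} ({io_class} IO) R = {r_p2p:.2f} ohm "
          f"exceeds {P2P_LIMIT_OHMS[io_class]:.1f} ohm limit")
```

The point of the toy example is only that the same electrical measurement is judged against different limits once pads are classified, which is exactly where the area and cost savings for internal IOs come from.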

To summarize, the key challenges for 3D ESD verification include:

  • Differentiating between ESD protection for external IOs versus internal IOs
  • Handling CDM and HBM constraints for die-to-die connections
  • Determining the minimum ESD protection needed to avoid failure of the final 3D IC product
  • Accounting for different technology nodes and foundries across dies, and handling their interfaces
  • Determining how to source from multiple vendors and ensure consistent ESD protection
  • Dealing with different ESD design methodologies

Automating ESD verification for 3D IC designs

Modern ESD verification tools, such as Calibre 3DPERC, address these new challenges head-on. The recommended workflow combines die-level and assembly-level analysis. Using Calibre PERC, designers first verify the ESD robustness of each die and interposer. Calibre 3DPERC then performs system-level checks, identifying point-to-point (P2P), topology and geometrical violations across the assembled stack. This robust approach ensures that ESD reliability is maintained throughout heterogeneous 3D architectures (figure 3).

Figure 3: 3D ESD verification methodology (using Calibre 3DPERC).

The bottom line on ESD verification for 3D IC

ESD protection is an essential element in IC design. While 2D ESD verification is well established, 3D architectures require a new mindset and advanced verification capabilities to address evolving threats. ESD devices can span multiple dies and must be evaluated together for correct results. External and internal IOs have different ESD requirements. Moreover, mixing technology nodes from different foundries adds to the 3D ESD verification challenge. Designers should consider adopting a newer, automated ESD verification methodology to effectively and accurately address the challenges of ESD robustness in 3D designs. Ensuring accurate and consistent 3D ESD protection raises reliability and product life, ensuring these designs deliver the value and functionality the market demands.

Author

Dina Medhat is a principal technologist and technical lead for Calibre Design Solutions at Siemens EDA, a part of Siemens Digital Industries Software. She has held multiple product and technical marketing roles in Siemens EDA. She received her B.Sc., M.Sc., and Ph.D. degrees from Ain Shams University in Cairo, Egypt. She coauthored a book chapter in Reliability Characterisation of Electrical and Electronic Systems, (Jonathan Swingler, Editor—Elsevier, 2015). In addition to over 45 publications, she holds a U.S. patent.  Her research interests include reliability verification, electrostatic discharge, emerging technologies, 3D integrated circuits, and physical verification.

Also Read:

Signal Integrity Verification Using SPICE and IBIS-AMI

Propelling DFT to New Levels of Coverage

AI-Driven DRC Productivity Optimization: Insights from Siemens EDA’s 2025 TSMC OIP Presentation



Navigating SoC Tradeoffs from IP to Ecosystem
by Daniel Nenni on 12-17-2025 at 8:00 am


Building a complex SoC is a risky endeavor that demands careful planning, strategic decisions, and collaboration across hardware and software domains. As highlighted in Darren Jones’ RISC-V Summit presentation from Andes Technology, titled “From Blueprint to Reality: Navigating SoC Tradeoffs, IP, and Ecosystem,” the journey from conceptual design to functional silicon involves navigating numerous tradeoffs while leveraging intellectual property and a supportive ecosystem. This process is not just technical but also strategic, ensuring that the final product meets performance, power, and cost goals while incorporating unique innovations, what Darren calls the “Secret Sauce.”

The first step in SoC development is defining clear goals. Engineers must identify key metrics such as performance benchmarks, power efficiency, area constraints, and market timelines. Understanding the “Secret Sauce,” the distinctive feature that sets the SoC apart (such as advanced AI acceleration or ultra-low power consumption), is crucial. A realistic assessment of schedule, resources, and costs prevents overruns. For instance, underestimating integration time can derail projects, as SoCs often combine custom logic with third-party IP.

SoC architecture forms the foundation, deciding what functions are implemented in hardware versus software. Hardware handles time-critical tasks like signal processing, while software offers flexibility for updates. This partitioning affects everything from power usage to scalability. Darren emphasizes modeling the architecture early to simulate tradeoffs, using tools like SystemC for high-level abstraction.
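
As a rough illustration of what early tradeoff modeling looks like (and only an illustration; production teams would build SystemC/TLM models against real workloads), the sketch below estimates per-frame latency and average power for a few hypothetical hardware/software partitions. Every task name and performance number is an assumption.

```python
# Minimal sketch of early hardware/software partitioning tradeoff analysis.
# The task list and throughput/power figures are purely illustrative.

TASKS = {
    # task: (operations per frame, can_be_accelerated)
    "preprocess":  (2e6, True),
    "inference":   (50e6, True),
    "postprocess": (1e6, False),
}

CPU_OPS_PER_SEC, CPU_WATTS = 2e9, 1.0    # assumed general-purpose core
ACC_OPS_PER_SEC, ACC_WATTS = 40e9, 0.5   # assumed dedicated accelerator

def evaluate(accelerated: set) -> tuple:
    """Return (latency per frame in ms, average power in W) for a partition."""
    latency, energy = 0.0, 0.0
    for task, (ops, can_accel) in TASKS.items():
        on_acc = task in accelerated and can_accel
        rate = ACC_OPS_PER_SEC if on_acc else CPU_OPS_PER_SEC
        watts = ACC_WATTS if on_acc else CPU_WATTS
        t = ops / rate
        latency += t
        energy += watts * t
    return latency * 1e3, energy / latency

for partition in [set(), {"inference"}, {"preprocess", "inference"}]:
    lat_ms, avg_w = evaluate(partition)
    print(f"accelerated={sorted(partition) or ['none']}: "
          f"{lat_ms:.2f} ms/frame, {avg_w:.2f} W avg")
```

Even a crude model like this makes the partitioning conversation quantitative before any RTL is written, which is the spirit of the early modeling Darren recommends.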

Hardware design involves critical choices: make or buy. Developing custom blocks in-house allows tailoring but increases risk and time. Buying IP, particularly processor cores, accelerates development. Darren focuses on processor IP selection, stressing hardware considerations like power, performance, and area (PPA). Bus interface compatibility ensures seamless integration, such as with AXI or AHB standards. Customization is key in RISC-V ecosystems, where extensions like vector processing or custom operations enable the “Secret Sauce.” Questions like “Can your vendor enable features you didn’t think possible?” underscore the need for flexible partners.

Proper IP deliverables are non-negotiable: RTL code, testbenches for verification, documentation, and IP-XACT for metadata. Models range from fast instruction-set simulators for software development to cycle-accurate ones for timing validation and SystemC for system-level simulation. Product support and quality silicon-proven designs from reputable vendors mitigate risks. Andes Technology, a leader in RISC-V IP, exemplifies this with their AndesCore CPUs, which support coherent multi-core setups with features like platform-level interrupt controllers and L2 cache managers.

Software considerations are equally vital, often overlapping with hardware choices. Development tools, compilers, and IDEs must support the architecture. Operating system availability (Linux for complex applications, RTOS for real-time systems, or bare-metal for simplicity) affects portability. Legacy code porting requires compiler compatibility, while application and firmware development demands efficient toolchains. Third-party code integration, such as DSP or neural network libraries, enhances functionality. Debug tools, including JTAG interfaces and software profilers, are essential for troubleshooting.

Collaboration with IP vendors is a recurring theme. Engaging early facilitates architecture discussions, customization, and benchmarking. Vendors like Andes provide PPA data and models to validate designs. Deliverables empower success: run the testbench to verify IP, read docs thoroughly, and use models for co-simulation. Product support prevents wasted effort. Darren advises contacting vendors promptly and building relationships for ongoing assistance.

In practice, these elements form an ecosystem where tradeoffs are inevitable. Prioritizing power might sacrifice performance, or customization could inflate costs. Successful SoCs, like those in IoT devices or automotive systems, balance these through iterative modeling and vendor partnerships. Andes’ tools, such as their DNN use-case models for neural networks, illustrate how integrated ecosystems support applications from frame capture to AI inferencing.

Bottom Line: Navigating SoC development requires a holistic approach. By knowing goals, architecting wisely, selecting robust IP, addressing software needs, and fostering vendor collaborations, teams can turn blueprints into reality. As Jones concludes, this ecosystem-driven strategy not only mitigates challenges but unlocks innovation, ensuring competitive edges in a fast-evolving semiconductor landscape.

Also Read:

The RISC-V Revolution: Insights from the 2025 Summits and Andes Technology’s Pivotal Role

Beyond Traditional OOO: A Time-Based, Slice-Based Approach to High-Performance RISC-V CPUs

Andes Technology: Powering the Full Spectrum – from Embedded Control to AI and Beyond



Reimagining Architectural Exploration in the Age of AI
by Bernard Murphy on 12-17-2025 at 6:00 am


This is not about architecting a full SoC from scratch. You already have a competitive platform; now you want to add some kind of accelerator, maybe video, audio, or ML, and need to explore architectural options for how the accelerator and software should be partitioned, and to optimize PPA. Now that we have AI to help us optimize, you’d like to run multiple experiments to drive training, from which you can find an optimum that meets your needs.

Challenges for AI-based automation

Reinforcement learning methods are already popular at the back end of design, for optimization against multi-physics analytics as one example. There, analysis is very time-consuming, so learning leverages a sparse set of sample states, yet the method still delivers meaningful optimization, for thermal among other parameters.

However, optimization for architectural design must explore a much more diverse and rich design space. How many ways can you build an AI accelerator or an MPEG codec? Reinforcement learning here must run over a denser sampling of the design space. Parameter sweeps are too expensive; reinforcement learning instead starts with initial sample states and follows paths towards maximizing a reward function, perhaps over hundreds or even thousands of samples.

Sample counts at this level require that the method used to compute a cost function (to first order based on area and timing) be fast. But equally, it must be reasonably accurate, correlating decently well with the costs you would get from a production analysis; otherwise fast optimization is meaningless.
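
As a purely illustrative stand-in for this loop (this is not Rise DA's implementation), the sketch below searches a tiny hypothetical parameter space with a greedy, reward-guided loop over a cheap surrogate cost function. A real flow would replace fast_cost() with post-HLS and implementation estimates and the greedy step with a proper reinforcement-learning policy.

```python
# Toy sketch of reward-guided design-space search with a fast surrogate cost.
# Parameter names, the cost model, and the greedy search are assumptions only.
import random

random.seed(0)

# Hypothetical accelerator parameters and their legal values
SPACE = {"unroll": [1, 2, 4, 8, 16], "pipeline": [1, 2, 4], "sram_kb": [32, 64, 128]}

def fast_cost(cfg):
    """Cheap stand-in for post-HLS area/timing estimation (lower is better)."""
    area = cfg["unroll"] * 1.0 + cfg["pipeline"] * 0.3 + cfg["sram_kb"] / 64.0
    latency = 100.0 / (cfg["unroll"] * cfg["pipeline"])
    return 0.5 * area + 0.5 * latency

def neighbor(cfg):
    """Perturb one parameter to a random legal value."""
    key = random.choice(list(SPACE))
    return {**cfg, key: random.choice(SPACE[key])}

cfg = {k: random.choice(v) for k, v in SPACE.items()}
best, best_cost = cfg, fast_cost(cfg)
for _ in range(500):                 # hundreds of cheap trials, per the text
    cand = neighbor(best)
    cost = fast_cost(cand)
    if cost < best_cost:             # reward = negative cost
        best, best_cost = cand, cost

print(f"best configuration: {best}, surrogate cost {best_cost:.2f}")
```

The essential property is the one argued above: each trial must be cheap enough to run hundreds or thousands of times, yet correlated well enough with production analysis that the optimum found is real.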

The class of architectures we’re considering is most likely to start in software, MATLAB or similar models, to support fast prototyping against realistic application loads. Mapping to hardware for exploration trials will go through C++, SystemC, or perhaps behavioral SystemVerilog depending on architect/designer preference. Then through high-level synthesis (HLS) to implementable RTL. From there on to implementation: physical synthesis and further if needed to provide cost feedback to reinforcement learning. Cost functions are computed after HLS, also after physical implementation, feeding back to the next round of learning.

Rise DA in partnership with Precision Innovations

This HLS step is an obvious area of strength for Rise DA, who cite they are 10x faster than other HLS tools. They also cite correlation with customer post-synthesis results to within 10% in most cases, thanks to built-in critical path RTL synthesis insight.

While Rise can and does support any RTL synthesis and implementation tool in their flow, the Precision Innovations partnership adds fast implementation estimation data, leveraging the open-source OpenROAD platform (developed at UCSD). Precision positions itself relative to OpenROAD much as Red Hat or SUSE does to Linux. Precision cites within 5% accuracy on area and 20% accuracy on timing, with results verified down to 12nm for tapeouts and down to 2nm for estimation. Precision cites 4-20x faster throughput than proprietary tools.

On correlation, Allan Klinck (co-founder at Rise DA), supported by Tom Spyrou (CEO at Precision), adds that in many cases Rise sees performance correlation within a few percent because they are micromanaging the architecture mapping without needing to add cheat cells. Not surprising to me, since these days performance is heavily influenced by architecture. Equally, Tom says that they do not see significant variance by PDK, even down to 2-3nm. (For Rise, area correlation can show more variance thanks to factors outside their control, like memory sizes.)

In this partnership both ventures also offer a creative licensing model. Licensing is based on number of users and blocks, not number of parallel runs. You can run as many jobs in parallel as you want, especially important in view of learning applications.

Agentic support

Allan added that reinforcement learning is accomplished through an agent; they also offer a design agent. Starting from requirements prompts, it calls the Rise tools to synthesize an untimed design, which can then be refined interactively. That design can be fed into the design-space exploration agent to optimize PPA or used as-is.

I like it. A creative combination of high-level synthesis coupling to a proven open-source RTL2GDS supported by Precision Innovations, together supporting an AI learning-friendly license model. You can learn more about Rise DA HERE, Precision Innovations HERE, and the OpenROAD project HERE.

Also note that Rise and Precision plan a webinar in February 2026.

(OpenROAD was developed by the team now at Precision Innovations while they were funded by DARPA/UCSD, and they continue to contribute to its evolution.)

Also Read:

Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS

Moving Beyond RTL at #62DAC

Generative AI Comes to High-Level Design



S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development
by Daniel Nenni on 12-16-2025 at 10:00 am


S2C, MachineWare, and Andes Technology today announced a collaborative co-emulation solution designed to address the increasing complexity of RISC-V-based chip design. The solution integrates MachineWare’s SIM-V virtual platform, S2C’s Genesis Architect and Prodigy FPGA Prototyping Systems, and Andes’ high-performance AX46MPV RISC-V CPU core, providing a unified environment for hardware and software co-verification.

As RISC-V designs move toward high-performance, multi-core, and highly customized architectures, pre-silicon software development and system validation have become more challenging. This co-emulation solution supports a “shift-left” verification approach, allowing hardware and software teams to work in parallel. The result is reduced development time and lower project risk.

MachineWare’s SIM-V: A High-Performance Virtual Platform

MachineWare contributes its SIM-V full-system virtual platform, based on SystemC TLM-2.0, which offers high simulation speed and extensibility. SIM-V integrates with a broad range of third-party toolchains for debugging, testing, and coverage analysis.

The key strengths of SIM-V lie in its exceptional simulation performance and comprehensive support for Andes RISC-V cores. The platform provides instruction-accurate reference models that fully implement the AndeStar V5 Instruction Set Architecture, including the RISC-V Vector (V) extension. Using the SIM-V Extension API, designers can model, validate, and debug proprietary processor enhancements within a complete system simulation, with full trace and introspection capabilities for detailed visibility. “Our customers need tools that accelerate development without compromising accuracy,” said Lukas Jünger, CEO of MachineWare. “This co‑emulation solution gives them the ability to validate hardware and software in parallel, reduce integration risks, and bring products to market faster than ever before.”

Andes: High-Performance, Customizable RISC-V Cores

Andes Technology contributes its advanced CPU IP, including the high-performance AndesCore™ AX46MPV multicore processor. AX46MPV is an 8-stage superscalar 64-bit RISC-V CPU that supports up to 16 cores with a multi-level cache structure, a powerful Vector Processing Unit (VPU) with up to 1024-bit VLEN and High-Bandwidth Vector Memory (HVM), and ISA customizations via Andes Custom Extension™ (ACE).

With full MMU support for Linux and versatile performance scaling, AX46MPV is well suited for data center AI computation elements, Linux-capable edge AI platforms, and high-performance MPUs in storage, networking, and other performance-critical domains.

“Our customers value our RISC-V IP for its performance, robustness, and ability to add custom extensions that accelerate their key applications,” said Dr. Charlie Su, President and CTO of Andes Technology. “By collaborating with MachineWare and S2C on this co-emulation approach, we’re giving them the ability to evaluate that impact and co-optimize their software stack and silicon architecture before committing to costly silicon tapeout.”

S2C: Bridging Virtual and Physical with Co-Emulation

S2C connects the SIM-V virtual platform to physical hardware through its Genesis Architect and Prodigy FPGA-based prototyping systems. In this hybrid setup, CPU models run in SIM-V while peripheral subsystems execute at high speed on FPGA, connected via a high-speed transactional bridge. This approach provides a realistic system context capable of running full software stacks—from bootloader to application—while retaining detailed debug visibility.

Key Use Cases & Customer Benefits

The joint solution supports multiple critical development stages:

  • Pre-silicon software development
  • Hardware/software co-verification
  • System performance analysis and tuning
  • Custom ISA extension development and debug

“Through co-emulation, our customers can accelerate time-to-market, reduce costs, and ensure software maturity—while benefiting from both cycle-accurate debugging and high-speed execution,” said Ying, VP of S2C. “But we can’t achieve this alone. We will continue to build on the high-performance advantages of hardware-assisted verification and work closely with our partners to collaboratively deliver shift-left solutions across the ecosystem.”

Looking Ahead

S2C, MachineWare, and Andes remain committed to advancing verification methodologies and providing scalable, efficient, and robust development tools for the RISC-V community. Together, the companies aim to strengthen the ecosystem for next-generation RISC-V chip design.

About MachineWare

MachineWare GmbH, headquartered in Aachen, Germany, is a leading provider of high-speed virtual prototyping solutions for pre-silicon software validation and testing. Its flagship platform, SIM‑V, delivers industry-leading RISC‑V simulation performance and extensibility, enabling accurate modeling of complex SoCs and custom ISA extensions. The company serves diverse sectors including AI, automotive, and telecommunications.

About Andes Technology

As a Founding Premier member of RISC-V International and a leader in commercial CPU IP, Andes Technology is driving the global adoption of RISC-V. Andes’ extensive RISC-V Processor IP portfolio spans from ultra-efficient 32-bit CPUs to high-performance 64-bit Out-of-Order multiprocessor coherent clusters. With advanced vector processing, DSP capabilities, the powerful Andes Automated Custom Extension (ACE) framework, end-to-end AI hardware/software stack, ISO 26262 certification with full compliance, and a robust software ecosystem, Andes unlocks the full potential of RISC-V, empowering customers to accelerate innovation across AI, automotive, communications, consumer electronics, data centers, and mobile devices. Over 17 billion Andes-powered SoCs are driving innovations globally. Discover more at www.andestech.com and connect with Andes on LinkedIn, X (formerly Twitter), YouTube, and Bilibili.

About S2C

S2C is a leading global supplier of FPGA prototyping solutions for today’s innovative SoC and ASIC designs, now with the second largest share of the global prototyping market. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 600 customers, including 11 of the world’s top 25 semiconductor companies, our world-class engineering team and customer-centric sales team are experts at addressing our customer’s SoC and ASIC verification needs. S2C has offices and sales representatives in San Jose, Seoul, Tokyo, Shanghai, Hsinchu, India, Europe and ANZ.

Also Read:

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China

Double SoC prototyping performance with S2C’s VP1902-based S8-100



Aerial 5G Connectivity: Feasibility for IoT and eMBB via UAVs
by Daniel Nenni on 12-16-2025 at 8:00 am


In the evolving landscape of telecommunications, uncrewed aerial vehicles (UAVs) are emerging as innovative platforms for extending 5G networks, particularly in areas lacking terrestrial infrastructure. Dr. Jyrki T. J. Penttinen’s paper, presented at the First International Conference on AI-enabled Unmanned Autonomous Vehicles and Internet of Things for Critical Services (AIVTS 2025) in Barcelona, explores the design feasibility and link budget assessment of UAV-mounted 5G systems for Internet of Things (IoT) and enhanced Mobile Broadband (eMBB) connectivity. As a Senior Program Manager at Alphacore Inc., Penttinen draws on his extensive background in cellular technologies, from network planning to 5G security, to propose a practical, commercial off-the-shelf (COTS) solution for rapid, ad-hoc deployments.

The study addresses a critical gap in current research on UAV-assisted networking. While initiatives like AT&T’s 5G Cell on Wings demonstrate temporary coverage extension for disasters or events, existing studies often overlook trade-offs in UAV altitude, frequency bands, and real-world COTS components. Penttinen’s novelty lies in conceptualizing a 3GPP-defined Non-Public Network (NPN) architecture tailored for aerial use. He evaluates a standalone NPN (SNPN) model, ideal for isolated, quick-to-deploy scenarios without mobile network operator dependency. This contrasts with public network-integrated variants, which could share RAN or core elements but introduce complexities in aerial contexts.

At the core of the design is a minimal viable 5G SNPN hosted on a single UAV, equipped with a lightweight gNB (e.g., Amarisoft Callbox Mini) for local UE connectivity. The system supports extensions to multi-UAV swarms via PC5 sidelinks, forming a mesh RAN. Equipment considerations emphasize feasibility: a 400-800g integrated small cell, 100-200g compute module like Raspberry Pi, and a 1.5-2kg battery pack, totaling 2.5-3kg payload—suitable for medium drones like DJI Matrice 300 RTK. Intelligence features, initially manual and GPS-assisted, could evolve to AI-driven UE-following based on signal heuristics.

Penttinen contrasts eMBB and IoT use cases through radio link budgets, highlighting their distinct parameters. eMBB targets high data rates (10 Mb/s–1 Gb/s) with wider bandwidths (20-400 MHz) and higher SNR (8-15 dB), while IoT prioritizes robust, low-power connectivity (50 b/s–1 Mb/s) using narrow bands (180 kHz–20 MHz) and negative SNR (-13 to -3 dB) with repetitions. Fade margins are lower for eMBB (3-5 dB) than IoT (10-15 dB), reflecting sensor placements in challenging environments.

Using ITU-R propagation models (P.525 and P.1411), the analysis quantifies performance in open/rural areas across low (1 GHz), mid (3.5/6 GHz), and high (24/28 GHz) bands, with UAV altitudes from 50m to 400m. For eMBB, path loss increases with altitude and frequency, limiting high-band viability beyond short ranges. At 50m altitude, high bands deliver up to 2 Gb/s near the UAV but drop sharply; mid-bands offer consistent 250-500 Mb/s. At 400m, low bands provide steady coverage for modest rates, while high bands become obsolete. Cell ranges exceed 1km for low/mid bands but shrink at higher altitudes due to earth curvature (radio horizon ~35km at 100m).
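
For readers who want to reproduce the flavor of these numbers, the sketch below applies the ITU-R P.525 free-space path loss formula to a UAV-to-ground link at a few bands and altitudes. The transmit power, antenna gains, and ground distance are illustrative assumptions, not the paper's link-budget parameters.

```python
# Back-of-envelope free-space path loss (ITU-R P.525) for a UAV-mounted cell.
# All link parameters below are illustrative placeholders.
import math

def fspl_db(freq_mhz: float, dist_km: float) -> float:
    """Free-space path loss in dB: 32.45 + 20*log10(f_MHz) + 20*log10(d_km)."""
    return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

def slant_range_km(altitude_m: float, ground_dist_m: float) -> float:
    """Straight-line UAV-to-UE distance (flat-earth approximation)."""
    return math.hypot(altitude_m, ground_dist_m) / 1000.0

# Assumed link parameters (illustrative only)
TX_POWER_DBM, TX_GAIN_DBI, RX_GAIN_DBI = 30.0, 3.0, 0.0

for freq_mhz in (1_000, 3_500, 28_000):        # low, mid, high band
    for altitude_m in (50, 400):
        d_km = slant_range_km(altitude_m, ground_dist_m=500)
        loss = fspl_db(freq_mhz, d_km)
        rx_dbm = TX_POWER_DBM + TX_GAIN_DBI + RX_GAIN_DBI - loss
        print(f"{freq_mhz/1000:>5.1f} GHz, {altitude_m:>3} m altitude: "
              f"FSPL {loss:6.1f} dB, Rx {rx_dbm:6.1f} dBm")
```

Because path loss grows with both frequency and slant range, the high bands lose roughly 29 dB relative to 1 GHz at the same distance, which is the effect behind the paper's conclusion that millimeter-wave coverage collapses at higher altitudes.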

IoT scenarios fare better, with NB-IoT achieving maximum coupling loss of 134-136 dB, reaching horizons limited by geometry, not RF. LTE-M covers tens of kilometers at 1 GHz, while RedCap, with higher rates (150 Mb/s), is confined to single-digit km at 3.5 GHz. Overall, IoT outperforms eMBB in coverage, benefiting from narrowband and low SNR tolerances.

Bottom line: The results underscore trade-offs: frequency selection is pivotal for balancing coverage and capacity in aerial eMBB, while IoT enables vast areas with feasible parameters. Penttinen concludes that COTS-based UAV-5G is viable and cost-effective compared to terrestrial alternatives, paving the way for multi-UAV swarms and AI-optimized positioning. This work, acknowledging input from Arizona State University experts, highlights UAVs’ potential in critical services, from disaster response to remote IoT monitoring, as 5G adoption surges toward surpassing 4G by 2028.

Contact Alphacore here.

About Alphacore

Alphacore Inc., founded in 2012, is located in the innovative Silicon Desert of Arizona’s technology center and is known for its innovations in rigorous data conversion microelectronics. Our verified high-speed, low-power data conversion IP products, available on the latest technology nodes, optimize time-to-market for demanding commercial or radiation-tolerant specifications.

Our engineering and leadership team combines long histories of delivering innovative data converter, radio-frequency (RF), analog and mixed signal products, and complete imaging systems for critical systems, through business success at companies from multi-nationals to startups. Our design team includes seasoned “Radiation-Hardened-By-Design” (RHBD) experts, and we specialize in designing high performance converter microelectronics, and reliability or authentication tools for niche needs of demanding segments, including scientific research, aerospace, defense, medical imaging, and homeland security.

Also Read:

A Tour of Advanced Data Conversion with Alphacore

Alphacore at the 2024 Design Automation Conference

Analog to Digital Converter Circuits for Communications, AI and Automotive



A Webinar About Electrical Verification – The Invisible Bottleneck in IC Design
by Mike Gianfagna on 12-16-2025 at 6:00 am


Electrical rule checking (ERC) is a standard part of any design flow. There is a hidden problem with the traditional approach, however. As designs grow in complexity, whether full-custom analog, mixed-signal, or advanced-node digital, the limitations of traditional ERC tools are becoming more problematic. This can lead to subtle but dangerous electrical errors and failures being missed. Aniah has developed a fundamentally new approach to ERC. Using the company’s ERC platform, every net is analyzed in all its electrical states and all electrical errors are detected early, automatically, and without tedious setup.
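
To give a feel for what an electrical rule check looks for (independent of how Aniah's OneCheck actually performs its analysis), here is a toy example of one classic ERC issue: a net that crosses between power domains without a level shifter. The netlist representation and domain names are hypothetical.

```python
# Toy illustration of one classic electrical rule check: flagging nets that
# cross between power domains without a level shifter. Domain names, the
# netlist representation, and the rule itself are illustrative assumptions.

# Hypothetical design data: each net lists the power domain of its driver,
# the domains of its receivers, and whether a level shifter sits on the path.
nets = [
    {"name": "core_to_io_en", "driver": "VDD_0V75", "receivers": ["VDD_1V8"], "level_shifter": False},
    {"name": "spi_clk",       "driver": "VDD_1V8",  "receivers": ["VDD_1V8"], "level_shifter": False},
    {"name": "wake_req",      "driver": "VDD_0V75", "receivers": ["VDD_1V8"], "level_shifter": True},
]

def check_domain_crossings(nets):
    """Report nets whose driver and receiver domains differ with no level shifter."""
    errors = []
    for net in nets:
        crossings = [rx for rx in net["receivers"] if rx != net["driver"]]
        if crossings and not net["level_shifter"]:
            errors.append(f"{net['name']}: {net['driver']} -> {', '.join(crossings)} "
                          f"without level shifter")
    return errors

for err in check_domain_crossings(nets):
    print("ERC violation:", err)
```

The hard part in real designs, and the focus of the webinar, is doing this kind of analysis across every electrical state of every net without drowning the designer in false errors.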

Aniah recently presented a webinar on the perils of traditional ERC and the benefits of its new approach. A very detailed analysis of the problem and the solution is presented, along with a live demonstration. If hidden ERC errors worry you, this is a must-see webinar. A replay link is coming but first let’s examine what is covered in a webinar about electrical verification – the invisible bottleneck in IC design.

The Webinar Presenters

The webinar is kicked off by Blandine Guivier – Aniah sales director for Europe. Blandine has over 12 years of experience in the semiconductor and EDA industries. She leads Aniah’s European business, helping semiconductor companies accelerate innovation through smarter electrical verification with Aniah’s OneCheck® platform.

After Blandine’s overview, Meryam Bouaziz conducts a live demonstration. She is an application engineer at Aniah. Meryam works with semiconductor companies to deploy Aniah’s OneCheck platform to improve the overall design reliability flow. She brings strong technical expertise, having worked as an analog design engineer for the first two years of her career before joining Aniah in 2023.

The demo is followed by a very useful Q&A session where questions from the live audience are answered. The entire event is under 40 minutes, so a lot of good information is covered very efficiently.

The Presentation

Blandine covers ERC fundamentals and the challenges presented by advanced designs. She then discusses Aniah’s OneCheck platform, covering adoption, breakthrough features and resulting design confidence. Regarding the fundamentals of ERC, the slide below, taken from the webinar, summarizes what ERC is and why it’s important.

ERC Fundamentals
Verification Gap

Next, Blandine discusses how verification productivity has not kept up with design and manufacturing productivity, resulting in a verification productivity gap as shown in the diagram to the right. Blandine covers the details behind this gap during her presentation.

She then describes Aniah’s OneCheck as the industry’s first shift-left ERC solution. Blandine goes into a lot of detail regarding how OneCheck can make a significant impact in verification productivity and design confidence.

Some top-level points she begins with include:

  • The platform is easy to adopt and simple to use, without the need for tech files from the foundry
  • It empowers designers and CAD teams with unprecedented error coverage
  • And it delivers a drastic reduction in false errors

Blandine provides detailed information to show how OneCheck achieves these goals. Customer experience and comments are included, as well as details of how the tool is integrated into existing design flows. Details about the range of errors found by OneCheck are also provided. The breadth of coverage is impressive. Specifics of how OneCheck performs its unique analysis are also presented.

The Demonstration

Meryam then provides a live demonstration of OneCheck. The demo focuses on three main topics:

  • The ease of use of OneCheck
  • The ease of debug for the errors found
  • The quality of the coverage (reduction in false errors)

Only the circuit description is needed (no tech files), so startup is indeed easy. Meryam selects a subset of errors to check for and runs the tool. She then goes through a detailed discussion of what errors were found, how to group them and how to determine the root cause. You will need to see this part for yourself to get a feeling for ease of use and clarity of results. I found it quite easy to follow the setup and analysis.

OneCheck is integrated with Cadence, so errors found can be cross-probed directly in Virtuoso. This clearly eases the debugging process.  Meryam examines several errors this way, performing analysis in OneCheck and cross-probing directly to Virtuoso. The process is quite impressive – you should see it for yourself.

To Learn More

If verification is getting more difficult for you as designs get larger and more complex, Aniah can help reduce this problem with its unique OneCheck ERC platform. You should definitely check it out. The time will be well-spent. You can access the webinar replay here. And that’s a webinar about electrical verification – the invisible bottleneck in IC design.

Also Read:

WEBINAR: Revolutionizing Electrical Verification in IC Design

Aniah at the 2025 Design Automation Conference #62DAC

Aniah and Electrical Rule Checking (ERC) #61DAC



Signal Integrity Verification Using SPICE and IBIS-AMI
by Daniel Payne on 12-15-2025 at 10:00 am


High-speed signals enable modern electronic systems, appearing in memory interfaces, SerDes channels, data center backplanes and automotive connectivity. Challenges arise from signal distortions like inter-symbol interference, channel loss and dispersion effects. Multi-gigabit data transfer rates in High-Bandwidth Memory (HBM) and Double Data Rate (DDR) interfaces require equalization techniques like Continuous Time Linear Equalization (CTLE) and Decision Feedback Equalization (DFE) to ensure adequate eye openings.

Verifying high-speed links calls for SPICE accuracy using IBIS and IBIS-AMI models. For chip-level and block-level circuits you still need SPICE models. Siemens has developed a tool that simulates IBIS, IBIS-AMI, S-parameter interconnect and SPICE together, enabling high-speed link verification.

IBIS-AMI

With IBIS-AMI you can now simulate the equalization effects, CTLE and DFE for high-speed channel simulations.

The Rx IBIS-AMI data-flow model has algorithmic descriptions for each function: CTLE, DFE, Clock Recovery.
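
To give a sense of what "algorithmic description" means here, the toy sketch below implements a one-tap decision feedback equalizer operating on already-sampled NRZ values. Real IBIS-AMI models are compiled executable models with a standardized interface and far richer behavior; this is only an illustration of the DFE idea.

```python
# Minimal one-tap DFE on sampled NRZ (+/-1) symbols: subtract a scaled copy
# of the previous decision to cancel first post-cursor inter-symbol
# interference. Tap value and channel model are illustrative assumptions.

def dfe_1tap(samples, tap=0.25):
    """Equalize each sample using the previous slicer decision."""
    decisions = []
    prev = 0.0
    for s in samples:
        equalized = s - tap * prev            # cancel first post-cursor ISI
        d = 1.0 if equalized >= 0 else -1.0   # slicer decision
        decisions.append(d)
        prev = d
    return decisions

# Transmitted bits distorted by 25% first post-cursor ISI (toy channel)
tx = [1, -1, -1, 1, 1, -1, 1]
rx = [tx[i] + 0.25 * (tx[i - 1] if i else 0) for i in range(len(tx))]

print(dfe_1tap(rx))   # recovers the transmitted +/-1 sequence
```

An AMI model packages this kind of adaptive signal processing (CTLE, multi-tap DFE, clock recovery) so the simulator can apply it to millions of bits far faster than transistor-level simulation could.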

In the past you could simulate a SPICE netlist, S-parameter files and IBIS models with one simulator, then needed a separate tool for IBIS-AMI, creating multiple iterations. Now, it’s possible to verify these circuits using Solido SPICE, reducing both iterations and verification times.

Solido Simulation Suite

Last year Siemens announced Solido Simulation Suite:

  • Solido SPICE
  • Solido LibSPICE
  • Solido FastSPICE

The Solido SPICE simulator supports IBIS-AMI models, IBIS, S-parameters, lossy coupled transmission line models and DSPF used for high-speed signaling projects. Verification using Solido SPICE shows nonlinear effects in AMI equalized eye openings in the presence of channel, I/O buffer, and power delivery.

DDR5

With Solido SPICE it’s possible to verify Chip-to-Chip (C2C) and Chip-to-Module (C2M) designs, where the chip is at SPICE-level, and the Rx receive-side chip has IBIS and IBIS-AMI models. For a DDR5 application the C2C verification of Read and Write cycles use either Tx or Rx modeled with IBIS and IBIS-AMI. Digital equalization is modeled with IBIS-AMI, then Solido SPICE simulates the mix of SPICE-level and all models.

SerDes

Shown below is the Rx eye diagram for an AMI processed output waveform on a 28Gbaud PAM4 C2C interface. The Tx is modeled at SPICE-level, PCB interconnects are included, dielectric losses and edge connector are modeled, and the receive-side Rx uses IBIS-AMI for the CTLE and DFE equalization.

NRZ and PAM4

Clock and Data Recovery (CDR) functions can be modeled with IBIS-AMI to support both Non-Return-to-Zero (NRZ) and Pulse Amplitude Modulation (PAM4) encoding. Solido SPICE enables SPICE-level verification of both high-speed DDR5 and SerDes chip designs using models for IBIS and IBIS-AMI, so that chips from other manufacturers can be simulated in your electronic system.

HyperLynx SI

With the HyperLynx SI tool a system designer can do signal integrity analysis at the system and board-level, all without having to be an SI expert. Users of Solido SPICE are getting shared IBIS-AMI technology from HyperLynx, ensuring consistent analysis results when alternating between circuit and system-level tools.

Summary

High-speed signaling requires an accurate verification approach. SPICE simulation is capable of capturing non-linear effects down to the transistor level. Supporting IBIS-AMI together with SPICE to capture those non-linear effects is therefore crucial for applications such as SerDes, HBM and others that use PAM4. Solido SPICE is able to verify these complex systems accurately and quickly.

Read the entire 10-page white paper from Siemens online, Combining SPICE with IBIS-AMI: Solving advanced signal integrity verification challenges with Solido SPICE.

Related Blogs



WEBINAR: Why Network-on-Chip (NoC) Has Become the Cornerstone of AI-Optimized SoCs
by Admin on 12-15-2025 at 8:00 am


By Andy Nightingale, VP of Product Management and Marketing

As AI adoption accelerates across markets, including automotive ADAS, large-scale compute, multimedia, and edge intelligence, the foundations of system-on-chip (SoC) designs are being pushed harder than ever. Modern AI engines generate tightly coordinated, data-intensive activity that places enormous stress on on-chip bandwidth and overall system efficiency. This pressure on data movement increasingly turns traditional interconnects into bottlenecks. Network-on-Chip (NoC) technology has emerged as an enhanced architectural solution that improves scalability, enabling teams to meet performance, power, and integration goals.

To address these challenges, teams are reevaluating the structure of on-chip communication, bringing physical considerations into NoC planning early in the flow, rather than treating physical implementation as a later, isolated step.

Arteris and AION Silicon recently partnered to present a webinar focused on physically aware SoCs and silicon-proven NoC deployments. The session, titled “Considerations When Architecting Your Next SoC: NoC,” is now available on demand and offers engineers practical, experience-based insights into NoC methodology, performance modeling, and real-world implementation.

Webinar Takeaways:

  • Deep Technical Insights Into AI-Driven SoC Requirements
    How emerging AI compute patterns shape data movement, coherence, and predictability—creating a clear need for scalable, high-performance NoCs.
  • Comprehensive Exploration of NoC Topology Choices
    How adaptable topologies align with floorplans and system objectives, and how topology decisions influence SoC behavior.
  • Physically Aware NoC Methodology
    How teams use Arteris FlexNoC to guide early architectural decisions, streamline integration, and achieve timing closure with predictable results.
  • Performance Modeling and KPI-Driven Analysis
    How modeling helps evaluate system-wide tradeoffs across compute, memory, and interconnect—ensuring decisions optimize whole-chip execution rather than isolated blocks.
  • Real Examples From Production-Class SoCs
    Case studies showing how advanced NoC design accelerates development, including comparisons between automated FlexGen flows and manual approaches.
  • Actionable NoC Deployment Best Practices
    A structured look at the complete NoC deployment process, including coherency strategies, power and clock coordination, and complexities introduced by multi-die architectures.
  • Strategic Competitive Advantages for Engineers
    How optimized NoCs improve design robustness and scalability, and how proven tooling and integration practices enable teams to move faster with greater certainty.

Why This Matters Now

Achieving cohesive, efficient SoCs depends on interconnect solutions that support both architectural goals and physical implementation realities. NoC technology provides the structured, scalable framework required to coordinate complex on-chip communication with confidence.

The on-demand webinar, featuring Arteris and AION Silicon, offers practical guidance based on production experience. Viewers will gain a clear understanding of how a disciplined NoC strategy strengthens system integration and improves predictability in advanced AI-driven SoCs.

If you’re planning an AI-focused chip, this is a session you won’t want to miss.

WATCH THE RECORDING

Presenters:

Andy Nightingale, VP of Product Management and Marketing at Arteris

Piyush Singh, Principal Digital SoC Architect at AION Silicon

Also Read:

The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs

Arteris Simplifies Design Reuse with Magillem Packaging

Arteris at the 2025 Design Automation Conference #62DAC



Quantum Computing Algorithms and Applications
by Bernard Murphy on 12-15-2025 at 6:00 am


In an upcoming Innovation blog we’ll get into how quantum computers are programmed. Here I’d like to look more closely at algorithms beyond Grover and Shor, and what practical applications there might be for quantum computing. I also take a quick look at what analysts are saying about potential market size. Even more than in AI, this is a field where it can be difficult to separate promise from reality, especially when core concepts are so alien to conventional compute ideas. I found my research on the topic helped me develop a somewhat clearer view.

Algorithms

First a nod to Grover’s and Shor’s algorithms, the best-known in this area. Grover’s algorithm searches a list for a candidate that best meets some objective, say largest number in a list. For an unsorted list, Grover’s method is quadratically faster than classical searches. Shor’s algorithm factorizes large integers and is exponentially faster than the best classical algorithms.

The Quantum Fourier Transform underlies the Shor algorithm and is conceptually similar to the classical discrete Fourier transform.
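
For reference, on n qubits (N = 2^n) the quantum Fourier transform acts on a computational basis state with exactly the same kernel the classical discrete Fourier transform applies to a sequence:

```latex
% Quantum Fourier transform on a basis state |j>, alongside the classical DFT of x_j
\[
\mathrm{QFT}:\; |j\rangle \;\longmapsto\; \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} e^{2\pi i\,jk/N}\,|k\rangle,
\qquad
X_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1} x_j\, e^{2\pi i\,jk/N}.
\]
```

The quantum version applies this transform to the amplitudes of all N basis states at once, which is the property Shor's algorithm exploits for period finding.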

Don’t get carried away by exponential speedups. Many algorithms are no faster or only polynomially faster when run on quantum. I may get more into that in another blog in this series.

Quantum Phase Estimation is the foundation for eigenvalue (allowed state energies) estimation in quantum chemistry. The equivalent in classical computing would be a numerical solution to a differential calculus problem. Variational Quantum Eigensolvers (VQE) further extend these ideas to find ground state (minimum) energies in quantum chemistry and materials science. Quantum Approximate Optimization Algorithms (QAOA) are useful for combinatorial optimization problems such as the traveling salesman problem or selecting a subset of discrete options from a larger set as an input to drive classical optimization algorithms.

Applications

I can’t find any claim of production applications today. Looking forward, there are a few important considerations that factor into when we might see such applications. First, there is a clear trend toward algorithms which rely on a hybrid of classical and quantum computation, where algorithms intentionally switch back and forth between classical and quantum stages. Classical is used for parts of an algorithm where no speedup is required and quantum is used for core components of the algorithm where speedup is very much required.

Second, there is a useful classification representing quantum compute today as an era of Noisy Intermediate Scale Quantum (NISQ) – 10-100+ qubits, allowing only for short coherence times. This era is expected to run through ~2030. The subsequent era of Fault Tolerant Quantum Computing (FTQC) should allow for 1000s of ideal qubits (with quantum error correction) and hours of hybrid computation time.

Third, quantum problems in chemistry, physics, and materials science are in many ways the easiest fit for this kind of computation and are therefore areas we might see fastest progress, though applications in high demand non-science domains could also be contenders.

Many claims seem to depend on FTQC; however, a few might be attainable in the NISQ era. Fourier transform algorithms on conventional computers are used extensively today in X-ray crystallography as guidance for materials and biotech molecular structure analysis. Such analysis does not scale in complexity as fast as direct molecular modeling, which is currently limited to small molecules.

An urgent demand that might accelerate development is quantum sensing against terrestrial magnetic field maps as a hack-proof alternative to GPS. There are also arguments for use in finance for portfolio design and credit scoring. Check out this video, starting at about 23:30 for a discussion of a portfolio optimization application.

Later applications are more ambitious, including a new approach to generating ammonia for agricultural fertilizer, improving battery design, better understanding of how drugs are metabolized in the body, and improving understanding of (nuclear) fusion reactions.

Analyst views on market size

McKinsey in 2024 projected that the total quantum technologies market (quantum computing, quantum communication and quantum sensing) could grow up to $97B by 2035, with quantum computing contributing between $28B and $72B of that number. They also project the quantum technologies market could be as large as $200B by 2040.

These are wide ranges and long time-horizons, not encouraging for early investors. BCG says that opportunities in the NISQ era are not panning out as well as they projected in their 2021 report, though they still see active growth from 2030 to 2040 and major growth beyond that point. Meantime they forecast a market for tech providers valued between $1B and $2B by 2030, the bulk of that coming from public sector and corporate investment.

My view, for what it’s worth, started at “never going to get out of university labs”, moved to thinking I might be completely wrong given the scale of public and private investment, and now sits somewhere in between. Finance and security/safety critical drivers like an alternative to GPS, together with technology advances in quantum error correction and hybrid flows, might accelerate progress at least in some applications, possibly within the NISQ era.

Failing dramatic breakthroughs, I am sure quantum accelerators will become important eventually, perhaps by 2040, though it is interesting that claims of supremacy often seem to supercharge algorithm advances on classical computers! Benefiting all markets, though not so good for quantum.

Also Read:

Superhuman AI for Design Verification, Delivered at Scale

The Quantum Threat: Why Industrial Control Systems Must Be Ready and How PQShield Is Leading the Defense

AI Deployment Trends Outside Electronic Design



imec on the Benefits of ASICs and How to Seize Them
by Daniel Nenni on 12-14-2025 at 2:00 pm


In an era where product differentiation increasingly depends on performance, power efficiency, and form factor, Application-Specific Integrated Circuits (ASICs) have become the ultimate competitive weapon for innovative companies. Unlike off-the-shelf processors, FPGAs, or even ASSPs, a full- or semi-custom ASIC is engineered from the ground up (or from proven building blocks) for one specific application. The result: dramatically lower unit cost at volume, smaller size, lower power consumption, higher performance, better supply-chain control, and strong IP protection.

The whitepaper published by imec’s IC-Link division makes a compelling case that ASICs are no longer reserved for tech giants. Advances in design tools, IP reuse, multi-project wafer (MPW) shuttles, and mature foundry ecosystems have made custom silicon accessible even to startups and mid-sized companies.

Key advantages at a glance

  1. Cost efficiency at scale: While NRE can reach tens of millions of dollars for leading-edge nodes (e.g., ~$40 M for a 7 nm high-performance design), the per-die cost drops sharply. The paper shows a realistic 7 nm example where the ASIC unit price lands at $50–60 versus $90–100 for comparable commercial CPUs, delivering an ROI of 1.26× over a nine-year lifecycle at 250 k units/year (a back-of-envelope sketch of this comparison follows the list below).
  2. Miniaturization & integration: Combining multiple functions into one die (or advanced SiP) shrinks the solution dramatically—critical for wearables, implants, and IoT devices.
  3. Ultra-low power: Shorter interconnects, optimized power management, and removal of unused blocks routinely cut power by 5–10× compared with FPGA or discrete implementations.
  4. Performance: Tailored datapaths, dedicated accelerators, and shorter signal paths deliver the highest throughput at the lowest energy.
  5. Supply-chain resilience & IP protection: A single chip replaces dozens of components from multiple vendors, eliminating obsolescence risks highlighted during the COVID shortage era. Reverse-engineering a modern ASIC is also prohibitively expensive.
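
As a back-of-envelope illustration of how such a lifetime-cost comparison works (the whitepaper's exact assumptions are not reproduced here; every number below is a placeholder), consider:

```python
# Back-of-envelope lifetime-cost comparison between an ASIC and a commercial
# off-the-shelf part, in the spirit of the whitepaper's 7 nm example.
# All figures are illustrative placeholders, not the whitepaper's inputs.

NRE_USD         = 40e6      # one-time design/mask cost for the ASIC
ASIC_UNIT_USD   = 55.0      # assumed per-die price at volume
COTS_UNIT_USD   = 95.0      # assumed comparable commercial CPU price
UNITS_PER_YEAR  = 250_000
LIFECYCLE_YEARS = 9

total_units = UNITS_PER_YEAR * LIFECYCLE_YEARS
asic_total  = NRE_USD + ASIC_UNIT_USD * total_units
cots_total  = COTS_UNIT_USD * total_units

print(f"ASIC lifetime cost: ${asic_total/1e6:,.1f} M")
print(f"COTS lifetime cost: ${cots_total/1e6:,.1f} M")
print(f"Cost ratio (COTS / ASIC): {cots_total / asic_total:.2f}x")
```

The structure of the calculation is what matters: once volumes are large enough, the per-unit saving amortizes the NRE many times over, which is why the ROI grows with both annual volume and product lifetime.
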
Real-world proof points
  • Capri-Medical’s injectable migraine implant: reduced from a 3-hour surgery to a 20-minute outpatient procedure thanks to an ultra-small, multi-function ASIC.
  • Wiyo’s battery-less Wi-Fi-powered smart tags: only an ASIC could meet the microwatt power budget while harvesting energy from ambient 2.4 GHz signals.
  • Frontgrade Gaisler’s radiation-hardened space processors and Arm’s Morello secure SoC: both rely on custom silicon to satisfy extreme reliability and performance requirements that no commercial part can match.

The ASIC journey demystified

The whitepaper walks readers through every phase: from initial feasibility and system architecture, through detailed ASIC specification, RTL-to-GDSII design (digital) and full-custom layout (analog), co-design of chip and advanced package, mask making, assembly, test development, qualification, and finally volume production. IC-Link emphasizes early package co-design and the importance of design-for-test (DFT) to avoid costly test escapes.

Why partner with IC-Link?

As imec’s dedicated ASIC service arm, IC-Link offers two flexible business models:

  • Full Turnkey (low risk): one single point of contact from specification to delivered tested parts.
  • Customer-Owned Tooling (lowest cost): customers select only the services they need while retaining full control.

With direct access to TSMC, GlobalFoundries, UMC, and specialty processes (including imec’s own radiation-hard libraries), plus decades of packaging and test expertise, IC-Link has supported everything from 180 nm medical implants to 7 nm memory PHYs and complex 18-layer security SoCs.

Bottom Line: For any company facing size, power, cost, or supply-chain constraints—and targeting volume production—an ASIC is no longer a “nice-to-have.” It is rapidly becoming a strategic necessity. The imec IC-Link whitepaper convincingly shows that the barriers to entry have fallen: the expertise, tools, and manufacturing capacity now exist to bring custom silicon within reach of virtually any serious innovator. The question is no longer “Can we afford an ASIC?” but “Can we afford not to have one?”

Also Read:

Revitalizing Semiconductor StartUps

Podcast EP320: The Emerging Field of Quantum Technology and the Upcoming Q2B Event with Peter Olcott

Live Webinar: Considerations When Architecting Your Next SoC: NoC with Arteris and Aion Silicon