
Reimagining Architectural Exploration in the Age of AI

by Bernard Murphy on 12-17-2025 at 6:00 am

Rise and Precision flow

This is not about architecting a full SoC from scratch. You already have a competitive platform; now you want to add some kind of accelerator, maybe video, audio, or ML, and need to explore architectural options for how accelerator and software should be partitioned, and to optimize PPA. And now that we have AI to help us optimize, you would like to run multiple experiments to drive training, from which you can find an optimum that meets your needs.

Challenges for AI-based automation

Reinforcement learning methods are already popular at the back end of design, for example in optimization against multi-physics analytics. There, analysis is very time-consuming, so learning leverages a sparse set of sample states, yet the method still delivers meaningful optimization for thermal among other parameters.

However, optimization for architectural design must explore a much more diverse and rich design space. How many ways can you build an AI accelerator or an MPEG codec? Reinforcement learning here must run over a denser sampling of the design space. Parameter sweeps are too expensive; reinforcement learning instead starts with initial sample states and follows paths towards maximizing a reward function, perhaps over hundreds or even thousands of samples.

This level of sample count requires that the method used to compute a cost function (to first order, based on area and timing) be fast. But equally it must be reasonably accurate, correlating decently well with the costs you would get from a production analysis; otherwise fast optimization is meaningless.
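To make the sampling loop concrete, here is a toy sketch of reward-driven design-space search. Everything in it (the knobs, the cost model, the search policy) is invented for illustration: a simple greedy search stands in for reinforcement learning, and a real flow would score each candidate by running HLS and implementation estimators rather than a closed-form cost.

```python
import random

# Hypothetical accelerator design space: a few architectural knobs.
SPACE = {
    "pipeline_depth": [2, 4, 8, 16],
    "unroll_factor":  [1, 2, 4, 8],
    "num_lanes":      [1, 2, 4],
}

def estimate_cost(cfg):
    """Stand-in for a fast post-HLS area/timing estimate (lower is better)."""
    area  = cfg["unroll_factor"] * cfg["num_lanes"] * 10
    delay = 100 / (cfg["pipeline_depth"] * cfg["num_lanes"])
    return area + delay  # first-order blend of area and timing

def neighbor(cfg):
    """Perturb one knob at random (the 'action' in an RL-style search)."""
    knob = random.choice(list(SPACE))
    new = dict(cfg)
    new[knob] = random.choice(SPACE[knob])
    return new

random.seed(0)
best = {k: random.choice(v) for k, v in SPACE.items()}
best_cost = estimate_cost(best)
for _ in range(500):              # hundreds of samples, as discussed above
    cand = neighbor(best)
    cost = estimate_cost(cand)
    if cost < best_cost:          # greedy reward: keep only improvements
        best, best_cost = cand, cost
print(best, round(best_cost, 1))
```

The point of the sketch is the structure, not the policy: because `estimate_cost` is called hundreds of times per experiment, its speed and its correlation with production analysis dominate the quality of the whole optimization.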

The class of architectures we're considering is most likely to start in software, in MATLAB or similar models, to support fast prototyping against realistic application loads. Mapping to hardware for exploration trials will go through C++, SystemC, or perhaps behavioral SystemVerilog, depending on architect/designer preference, then through high-level synthesis (HLS) to implementable RTL, and from there on to implementation: physical synthesis and further if needed to provide cost feedback to reinforcement learning. Cost functions are computed after HLS and again after physical implementation, feeding back to the next round of learning.

Rise DA in partnership with Precision Innovations

This HLS step is an obvious area of strength for Rise DA, which cites synthesis 10x faster than other HLS tools. Rise also cites correlation with customer post-synthesis results to within 10% in most cases, thanks to built-in critical-path RTL synthesis insight.

While Rise can and does support any RTL synthesis and implementation tool in their flow, the Precision Innovations partnership adds fast implementation estimation data, leveraging the open-source OpenROAD platform (developed at UC San Diego). Precision positions itself as playing the Red Hat/SUSE role for OpenROAD that those companies play for Linux. Precision cites accuracy within 5% on area and 20% on timing, with results verified down to 12nm for tapeouts and down to 2nm for estimation, and 4-20x faster throughput than proprietary tools.

On correlation, Allan Klinck (co-founder at Rise DA), supported by Tom Spyrou (CEO at Precision), adds that in many cases Rise sees performance correlation within a few percent, because they micromanage the architecture mapping without needing to add cheat cells. Not surprising to me, since these days performance is heavily influenced by architecture. Equally, Tom says they do not see significant variance across PDKs, even down to 2-3nm. (For Rise, area correlation can show more variance thanks to factors outside their control, like memory sizes.)

In this partnership both ventures also offer a creative licensing model: licensing is based on the number of users and blocks, not the number of parallel runs. You can run as many jobs in parallel as you want, which is especially important for learning applications.

Agentic support

Allan added that reinforcement learning is accomplished through an agent, and that they also offer a design agent. Starting from requirements prompts, the design agent calls the Rise tools to synthesize an untimed design, which can then be refined interactively. That design can be fed into the design-space exploration agent to optimize PPA, or used as-is.

I like it. A creative combination of high-level synthesis coupled to a proven open-source RTL2GDS flow supported by Precision Innovations, together supporting an AI-learning-friendly license model. You can learn more about Rise DA HERE, Precision Innovations HERE, and the OpenROAD project HERE.

Also note that Rise and Precision plan a webinar in February 2026.

(OpenROAD was developed by the team now at Precision Innovations while they were funded by DARPA/UCSD, and they continue to contribute to its evolution.)

Also Read:

Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS

Moving Beyond RTL at #62DAC

Generative AI Comes to High-Level Design


S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development

by Daniel Nenni on 12-16-2025 at 10:00 am


S2C, MachineWare, and Andes Technology today announced a collaborative co-emulation solution designed to address the increasing complexity of RISC-V-based chip design. The solution integrates MachineWare's SIM-V virtual platform, S2C's Genesis Architect and Prodigy FPGA Prototyping Systems, and Andes' high-performance AX46MPV RISC-V CPU core, providing a unified environment for hardware and software co-verification.

As RISC-V designs move toward high-performance, multi-core, and highly customized architectures, pre-silicon software development and system validation have become more challenging. This co-emulation solution supports a “shift-left” verification approach, allowing hardware and software teams to work in parallel. The result is reduced development time and lower project risk.

MachineWare’s SIM-V: A High-Performance Virtual Platform

MachineWare contributes its SIM-V full-system virtual platform, based on SystemC TLM-2.0, which offers high simulation speed and extensibility. SIM-V integrates with a broad range of third-party toolchains for debugging, testing, and coverage analysis.

The key strengths of SIM-V lie in its exceptional simulation performance and comprehensive support for Andes RISC-V cores. The platform provides instruction-accurate reference models that fully implement the AndeStar V5 Instruction Set Architecture, including the RISC-V Vector (V) extension. Using the SIM-V Extension API, designers can model, validate, and debug proprietary processor enhancements within a complete system simulation, with full trace and introspection capabilities for detailed visibility. “Our customers need tools that accelerate development without compromising accuracy,” said Lukas Jünger, CEO of MachineWare. “This co‑emulation solution gives them the ability to validate hardware and software in parallel, reduce integration risks, and bring products to market faster than ever before.”

Andes: High-Performance, Customizable RISC-V Cores

Andes Technology contributes its advanced CPU IP, including the high-performance AndesCore™ AX46MPV multicore processor. AX46MPV is an 8-stage superscalar 64-bit RISC-V CPU that supports up to 16 cores with a multi-level cache structure, a powerful Vector Processing Unit (VPU) with up to 1024-bit VLEN and High-Bandwidth Vector Memory (HVM), and ISA customizations via Andes Custom Extension™ (ACE).

With full MMU support for Linux and versatile performance scaling, AX46MPV is well suited for data center AI computation elements, Linux-capable edge AI platforms, and high-performance MPUs in storage, networking, and other performance-critical domains.

“Our customers value our RISC-V IP for its performance, robustness, and ability to add custom extensions that accelerate their key applications,” said Dr. Charlie Su, President and CTO of Andes Technology. “By collaborating with MachineWare and S2C on this co-emulation approach, we’re giving them the ability to evaluate that impact and co-optimize their software stack and silicon architecture before committing to costly silicon tapeout.”

S2C: Bridging Virtual and Physical with Co-Emulation

S2C connects the SIM-V virtual platform to physical hardware through its Genesis Architect and Prodigy FPGA-based prototyping systems. In this hybrid setup, CPU models run in SIM-V while peripheral subsystems execute at high speed on FPGA, connected via a high-speed transactional bridge. This approach provides a realistic system context capable of running full software stacks—from bootloader to application—while retaining detailed debug visibility.

Key Use Cases & Customer Benefits

The joint solution supports multiple critical development stages:

  • Pre-silicon software development
  • Hardware/software co-verification
  • System performance analysis and tuning
  • Custom ISA extension development and debug

“Through co-emulation, our customers can accelerate time-to-market, reduce costs, and ensure software maturity—while benefiting from both cycle-accurate debugging and high-speed execution,” said Ying, VP of S2C. “But we can’t achieve this alone. We will continue to build on the high-performance advantages of hardware-assisted verification and work closely with our partners to collaboratively deliver shift-left solutions across the ecosystem.”

Looking Ahead

S2C, MachineWare, and Andes remain committed to advancing verification methodologies and providing scalable, efficient, and robust development tools for the RISC-V community. Together, the companies aim to strengthen the ecosystem for next-generation RISC-V chip design.

About MachineWare

MachineWare GmbH, headquartered in Aachen, Germany, is a leading provider of high-speed virtual prototyping solutions for pre-silicon software validation and testing. Its flagship platform, SIM‑V, delivers industry-leading RISC‑V simulation performance and extensibility, enabling accurate modeling of complex SoCs and custom ISA extensions. The company serves diverse sectors including AI, automotive, and telecommunications.

About Andes Technology

As a Founding Premier member of RISC-V International and a leader in commercial CPU IP, Andes Technology is driving the global adoption of RISC-V. Andes’ extensive RISC-V Processor IP portfolio spans from ultra-efficient 32-bit CPUs to high-performance 64-bit Out-of-Order multiprocessor coherent clusters. With advanced vector processing, DSP capabilities, the powerful Andes Automated Custom Extension (ACE) framework, end-to-end AI hardware/software stack, ISO 26262 certification with full compliance, and a robust software ecosystem, Andes unlocks the full potential of RISC-V, empowering customers to accelerate innovation across AI, automotive, communications, consumer electronics, data centers, and mobile devices. Over 17 billion Andes-powered SoCs are driving innovations globally. Discover more at www.andestech.com and connect with Andes on LinkedIn, X (formerly Twitter), YouTube, and Bilibili.

About S2C

S2C is a leading global supplier of FPGA prototyping solutions for today’s innovative SoC and ASIC designs, now with the second largest share of the global prototyping market. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 600 customers, including 11 of the world’s top 25 semiconductor companies, our world-class engineering team and customer-centric sales team are experts at addressing our customers’ SoC and ASIC verification needs. S2C has offices and sales representatives in San Jose, Seoul, Tokyo, Shanghai, Hsinchu, India, Europe and ANZ.

Also Read:

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China

Double SoC prototyping performance with S2C’s VP1902-based S8-100


Aerial 5G Connectivity: Feasibility for IoT and eMBB via UAVs

by Daniel Nenni on 12-16-2025 at 8:00 am

Alphacore Presentation Spain 2025

In the evolving landscape of telecommunications, uncrewed aerial vehicles (UAVs) are emerging as innovative platforms for extending 5G networks, particularly in areas lacking terrestrial infrastructure. Dr. Jyrki T. J. Penttinen’s paper, presented at the First International Conference on AI-enabled Unmanned Autonomous Vehicles and Internet of Things for Critical Services (AIVTS 2025) in Barcelona, explores the design feasibility and link budget assessment of UAV-mounted 5G systems for Internet of Things and enhanced Mobile Broadband (eMBB) connectivity. As a Senior Program Manager at Alphacore Inc., Penttinen draws on his extensive background in cellular technologies, from network planning to 5G security, to propose a practical, commercial off-the-shelf solution for rapid, ad-hoc deployments.

The study addresses a critical gap in current research on UAV-assisted networking. While initiatives like AT&T’s 5G Cell on Wings demonstrate temporary coverage extension for disasters or events, existing studies often overlook trade-offs in UAV altitude, frequency bands, and real-world COTS components. Penttinen’s novelty lies in conceptualizing a 3GPP-defined Non-Public Network architecture tailored for aerial use. He evaluates a standalone non-public network (SNPN) model, ideal for isolated, quick-to-deploy scenarios without mobile network operator dependency. This contrasts with public network-integrated variants, which could share RAN or core elements but introduce complexities in aerial contexts.

At the core of the design is a minimal viable 5G SNPN hosted on a single UAV, equipped with a lightweight gNB (e.g., Amarisoft Callbox Mini) for local UE connectivity. The system supports extensions to multi-UAV swarms via PC5 sidelinks, forming a mesh RAN. Equipment considerations emphasize feasibility: a 400-800g integrated small cell, 100-200g compute module like Raspberry Pi, and a 1.5-2kg battery pack, totaling 2.5-3kg payload—suitable for medium drones like DJI Matrice 300 RTK. Intelligence features, initially manual and GPS-assisted, could evolve to AI-driven UE-following based on signal heuristics.

Penttinen contrasts eMBB and IoT use cases through radio link budgets, highlighting their distinct parameters. eMBB targets high data rates (10 Mb/s–1 Gb/s) with wider bandwidths (20-400 MHz) and higher SNR (8-15 dB), while IoT prioritizes robust, low-power connectivity (50 b/s–1 Mb/s) using narrow bands (180 kHz–20 MHz) and negative SNR (-13 to -3 dB) with repetitions. Fade margins are lower for eMBB (3-5 dB) than IoT (10-15 dB), reflecting sensor placements in challenging environments.
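As a rough sanity check on these parameter ranges, the Shannon limit C = B·log2(1 + SNR) ties bandwidth and SNR to achievable rate. The figures below are my own back-of-envelope illustration (not numbers from the paper), picking one representative point from each range:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Upper bound on achievable rate: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# eMBB-style link: 100 MHz of bandwidth at 12 dB SNR
embb = shannon_capacity_bps(100e6, 12)   # ~407 Mb/s
# NB-IoT-style link: one 180 kHz carrier at -5 dB SNR
iot = shannon_capacity_bps(180e3, -5)    # ~71 kb/s
print(f"eMBB ~ {embb/1e6:.0f} Mb/s, IoT ~ {iot/1e3:.0f} kb/s")
```

The eMBB point lands in the hundreds of Mb/s and the IoT point in the tens of kb/s, consistent with the wide-band/high-SNR versus narrow-band/negative-SNR split described above.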

Using ITU-R propagation models (P.525 and P.1411), the analysis quantifies performance in open/rural areas across low (1 GHz), mid (3.5/6 GHz), and high (24/28 GHz) bands, with UAV altitudes from 50m to 400m. For eMBB, path loss increases with altitude and frequency, limiting high-band viability beyond short ranges. At 50m altitude, high bands deliver up to 2 Gb/s near the UAV but drop sharply; mid-bands offer consistent 250-500 Mb/s. At 400m, low bands provide steady coverage for modest rates, while high bands become obsolete. Cell ranges exceed 1km for low/mid bands but shrink at higher altitudes due to earth curvature (radio horizon ~35km at 100m).
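The frequency and horizon behavior can be reproduced with standard formulas. Here is my own quick sketch using the free-space model of ITU-R P.525 and the common 4/3-earth radio-horizon approximation (illustrative, not the paper's full P.1411 analysis):

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss: 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

def radio_horizon_km(height_m):
    """4/3-earth radio horizon: d ~ 3.57 * sqrt(h_m)."""
    return 3.57 * math.sqrt(height_m)

# Path loss grows with frequency, squeezing the high bands:
loss_1ghz  = fspl_db(1.0, 1000)    # ~92 dB at 1 km, 1 GHz
loss_28ghz = fspl_db(1.0, 28000)   # ~121 dB at 1 km, 28 GHz
horizon = radio_horizon_km(100)    # ~35.7 km at 100 m altitude
```

The roughly 29 dB extra loss at 28 GHz versus 1 GHz over the same distance is why the high bands fall off so sharply, and the horizon result matches the ~35 km figure at 100 m quoted above.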

IoT scenarios fare better, with NB-IoT achieving maximum coupling loss of 134-136 dB, reaching horizons limited by geometry, not RF. LTE-M covers tens of kilometers at 1 GHz, while RedCap, with higher rates (150 Mb/s), is confined to single-digit km at 3.5 GHz. Overall, IoT outperforms eMBB in coverage, benefiting from narrowband and low SNR tolerances.

Bottom line: The results underscore trade-offs: frequency selection is pivotal for balancing coverage and capacity in aerial eMBB, while IoT enables vast areas with feasible parameters. Penttinen concludes that COTS-based UAV-5G is viable and cost-effective compared to terrestrial alternatives, paving the way for multi-UAV swarms and AI-optimized positioning. This work, acknowledging input from Arizona State University experts, highlights UAVs’ potential in critical services, from disaster response to remote IoT monitoring, as 5G adoption surges toward surpassing 4G by 2028.

Contact Alphacore here.

About Alphacore
Alphacore Inc., founded in 2012, is located in the innovative Silicon Desert of Arizona’s technology center and is known for its innovations in rigorous data conversion microelectronics. Our verified high-speed, low-power data conversion IP products available on latest technology nodes optimize time-to-market for demanding commercial or radiation-tolerant specifications.

Our engineering and leadership team combines long histories of delivering innovative data converter, radio-frequency (RF), analog and mixed signal products, and complete imaging systems for critical systems, through business success at companies from multi-nationals to startups. Our design team includes seasoned “Radiation-Hardened-By-Design” (RHBD) experts, and we specialize in designing high performance converter microelectronics, and reliability or authentication tools for niche needs of demanding segments, including scientific research, aerospace, defense, medical imaging, and homeland security.

Also Read:

A Tour of Advanced Data Conversion with Alphacore

Alphacore at the 2024 Design Automation Conference

Analog to Digital Converter Circuits for Communications, AI and Automotive


A Webinar About Electrical Verification – The Invisible Bottleneck in IC Design

by Mike Gianfagna on 12-16-2025 at 6:00 am


Electrical rule checking (ERC) is a standard part of any design flow. There is a hidden problem with the traditional approach, however. As designs grow in complexity, whether full-custom analog, mixed-signal, or advanced-node digital, the limitations of traditional ERC tools are becoming more problematic. This can lead to missing subtle but dangerous electrical errors/failures. Aniah has developed a fundamentally new approach to ERC. Using the company’s ERC platform, every net is analyzed in all its electrical states and all electrical errors are detected early, automatically, and without tedious setup.

Aniah recently presented a webinar on the perils of traditional ERC and the benefits of its new approach. A very detailed analysis of the problem and the solution is presented, along with a live demonstration. If hidden ERC errors worry you, this is a must-see webinar. A replay link is included below, but first let’s examine what is covered in a webinar about electrical verification – the invisible bottleneck in IC design.

The Webinar Presenters

The webinar is kicked off by Blandine Guivier – Aniah sales director for Europe. Blandine has over 12 years of experience in the semiconductor and EDA industries. She leads Aniah’s European business, helping semiconductor companies accelerate innovation through smarter electrical verification with Aniah’s OneCheck® platform.

After Blandine’s overview, Meryam Bouaziz conducts a live demonstration. She is an application engineer at Aniah who works with semiconductor companies to deploy Aniah’s OneCheck platform to improve the overall design reliability flow. She brings strong technical expertise: she began her career with two years as an analog design engineer before joining Aniah in 2023.

The demo is followed by a very useful Q&A session where questions from the live audience are answered. The entire event is under 40 minutes, so a lot of good information is covered very efficiently.

The Presentation

Blandine covers ERC fundamentals and the challenges presented by advanced designs. She then discusses Aniah’s OneCheck platform, covering adoption, breakthrough features and resulting design confidence. Regarding the fundamentals of ERC, the slide below, taken from the webinar, summarizes what ERC is and why it’s important.

ERC Fundamentals
Verification Gap

Next, Blandine discusses how verification productivity has not kept up with design and manufacturing productivity, resulting in a verification productivity gap as shown in the diagram to the right. Blandine covers the details behind this gap during her presentation.

She then describes Aniah’s OneCheck as the industry’s first shift-left ERC solution. Blandine goes into a lot of detail regarding how OneCheck can make a significant impact in verification productivity and design confidence.

Some top-level points she begins with include:

  • The platform is easy to adopt and simple to use, without the need for tech files from the foundry
  • It empowers designers and CAD teams with unprecedented error coverage
  • And it delivers a drastic reduction in false errors

Blandine provides detailed information to show how OneCheck achieves these goals. Customer experience and comments are included, as well as details of how the tool is integrated into existing design flows. Details about the range of errors found by OneCheck are also provided. The breadth of coverage is impressive. Specifics of how OneCheck performs its unique analysis are also presented.

The Demonstration

Meryam then provides a live demonstration of OneCheck. The demo focuses on three main topics:

  • The ease of use of OneCheck
  • The ease of debug for the errors found
  • The quality of the coverage (reduction in false errors)

Only the circuit description is needed (no tech files), so startup is indeed easy. Meryam selects a subset of errors to check for and runs the tool. She then goes through a detailed discussion of which errors were found, how to group them, and how to determine the root cause. You will need to see this part for yourself to get a feeling for the ease of use and clarity of results. I found it quite easy to follow the setup and analysis.

OneCheck is integrated with Cadence, so errors found can be cross-probed directly in Virtuoso. This clearly eases the debugging process.  Meryam examines several errors this way, performing analysis in OneCheck and cross-probing directly to Virtuoso. The process is quite impressive – you should see it for yourself.

To Learn More

If verification is getting more difficult for you as designs get larger and more complex, Aniah can help reduce this problem with its unique OneCheck ERC platform. You should definitely check it out. The time will be well-spent. You can access the webinar replay here. And that’s a webinar about electrical verification – the invisible bottleneck in IC design.

Also Read:

WEBINAR: Revolutionizing Electrical Verification in IC Design

Aniah at the 2025 Design Automation Conference #62DAC

Aniah and Electrical Rule Checking (ERC) #61DAC


Signal Integrity Verification Using SPICE and IBIS-AMI

by Daniel Payne on 12-15-2025 at 10:00 am


High-speed signals enable electronic systems through memory interfaces, SerDes channels, data center backplanes, and connectivity in automobiles. Challenges arise from signal distortions like inter-symbol interference, channel loss, and dispersion effects. Multi-gigabit data transfer rates in High-Bandwidth Memory (HBM) and Double Data Rate (DDR) require equalization techniques like Continuous Time Linear Equalization (CTLE) and Decision Feedback Equalization (DFE) to ensure adequate eye openings.

Verifying high-speed links calls for SPICE accuracy using IBIS and IBIS-AMI models. For chip-level and block-level circuits you still need SPICE models. Siemens has developed a tool that simulates IBIS, IBIS-AMI, S-parameter interconnect and SPICE together, enabling high-speed link verification.

IBIS-AMI

With IBIS-AMI you can now simulate the equalization effects, CTLE and DFE for high-speed channel simulations.

The Rx IBIS-AMI data-flow model has algorithmic descriptions for each function: CTLE, DFE, Clock Recovery.

In the past you could simulate a SPICE netlist, S-parameter files, and IBIS models with one simulator, then needed another, separate tool for IBIS-AMI, creating multiple iterations. Now it’s possible to verify these circuits using Solido SPICE, reducing both iterations and verification times.

Solido Simulation Suite

Last year Siemens announced Solido Simulation Suite:

  • Solido SPICE
  • Solido LibSPICE
  • Solido FastSPICE

The Solido SPICE simulator supports IBIS-AMI models, IBIS, S-parameters, lossy coupled transmission line models and DSPF used for high-speed signaling projects. Verification using Solido SPICE shows nonlinear effects in AMI equalized eye openings in the presence of channel, I/O buffer, and power delivery.

DDR5

With Solido SPICE it’s possible to verify Chip-to-Chip (C2C) and Chip-to-Module (C2M) designs, where one chip is at SPICE level and the Rx receive-side chip uses IBIS and IBIS-AMI models. For a DDR5 application, C2C verification of Read and Write cycles uses either the Tx or the Rx modeled with IBIS and IBIS-AMI. Digital equalization is modeled with IBIS-AMI, then Solido SPICE simulates the mix of SPICE-level circuits and models.

SerDes

Shown below is the Rx eye diagram for an AMI processed output waveform on a 28Gbaud PAM4 C2C interface. The Tx is modeled at SPICE-level, PCB interconnects are included, dielectric losses and edge connector are modeled, and the receive-side Rx uses IBIS-AMI for the CTLE and DFE equalization.

NRZ and PAM4

Clock and Data Recovery (CDR) functions can be modeled with IBIS-AMI to support both Non-Return-to-Zero (NRZ) and Pulse Amplitude Modulation (PAM4) encoding. Solido SPICE enables SPICE-level verification of both high-speed DDR5 and SerDes chip designs using models for IBIS and IBIS-AMI, so that chips from other manufacturers can be simulated in your electronic system.
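To make the encoding difference concrete, here is a toy sketch (my own illustration, not Siemens code) of how PAM4 packs two bits per symbol versus NRZ’s one bit per symbol:

```python
# PAM4 maps two bits per symbol onto four amplitude levels, doubling the
# bit rate of NRZ at the same symbol (baud) rate. A Gray-coded level map
# is commonly used so adjacent levels differ by only one bit.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence as PAM4 symbols."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def nrz_encode(bits):
    """NRZ: one bit per symbol, two levels."""
    return [1 if b else -1 for b in bits]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
print(pam4_encode(bits))   # 4 symbols carry 8 bits
print(nrz_encode(bits))    # 8 symbols carry 8 bits
```

The smaller spacing between PAM4 levels is exactly why eye openings shrink and why CTLE/DFE equalization, modeled via IBIS-AMI, becomes essential.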

HyperLynx SI

With the HyperLynx SI tool a system designer can do signal integrity analysis at the system and board-level, all without having to be an SI expert. Users of Solido SPICE are getting shared IBIS-AMI technology from HyperLynx, ensuring consistent analysis results when alternating between circuit and system-level tools.

Summary

High-speed signaling requires an accurate verification approach. SPICE simulation is capable of capturing non-linear effects down to the transistor level, so supporting IBIS-AMI alongside SPICE is crucial for applications such as SerDes, HBM, and others using PAM4. Solido SPICE is able to verify these complex systems accurately and quickly.

Read the entire 10-page white paper from Siemens online: Combining SPICE with IBIS-AMI: Solving advanced signal integrity verification challenges with Solido SPICE.



WEBINAR: Why Network-on-Chip (NoC) Has Become the Cornerstone of AI-Optimized SoCs

by Admin on 12-15-2025 at 8:00 am

AION Silicon Arteris Webinar

By Andy Nightingale, VP of Product Management and Marketing

As AI adoption accelerates across markets, including automotive ADAS, large-scale compute, multimedia, and edge intelligence, the foundations of system-on-chip (SoC) designs are being pushed harder than ever. Modern AI engines generate tightly coordinated, data-intensive activity that places enormous stress on on-chip bandwidth and overall system efficiency. This pressure on data movement increasingly turns traditional interconnects into bottlenecks. Network-on-Chip (NoC) technology has emerged as an enhanced architectural solution that improves scalability, enabling teams to meet performance, power, and integration goals.

To address these challenges, teams are reevaluating the structure of on-chip communication, bringing physical considerations into NoC planning early in the flow, rather than treating physical implementation as a later, isolated step.

Arteris and AION Silicon recently partnered to present a webinar focused on physically aware SoCs and silicon-proven NoC deployments. The session, titled “Considerations When Architecting Your Next SoC: NoC,” is now available on demand and offers engineers practical, experience-based insights into NoC methodology, performance modeling, and real-world implementation.

Webinar Takeaways:

  • Deep Technical Insights Into AI-Driven SoC Requirements
    How emerging AI compute patterns shape data movement, coherence, and predictability—creating a clear need for scalable, high-performance NoCs.
  • Comprehensive Exploration of NoC Topology Choices
    How adaptable topologies align with floorplans and system objectives, and how topology decisions influence SoC behavior.
  • Physically Aware NoC Methodology
    How teams use Arteris FlexNoC to guide early architectural decisions, streamline integration, and achieve timing closure with predictable results.
  • Performance Modeling and KPI-Driven Analysis
    How modeling helps evaluate system-wide tradeoffs across compute, memory, and interconnect—ensuring decisions optimize whole-chip execution rather than isolated blocks.
  • Real Examples From Production-Class SoCs
    Case studies showing how advanced NoC design accelerates development, including comparisons between automated FlexGen flows and manual approaches.
  • Actionable NoC Deployment Best Practices
    A structured look at the complete NoC deployment process, including coherency strategies, power and clock coordination, and complexities introduced by multi-die architectures.
  • Strategic Competitive Advantages for Engineers
    How optimized NoCs improve design robustness and scalability, and how proven tooling and integration practices enable teams to move faster with greater certainty.

Why This Matters Now

Achieving cohesive, efficient SoCs depends on interconnect solutions that support both architectural goals and physical implementation realities. NoC technology provides the structured, scalable framework required to coordinate complex on-chip communication with confidence.

The on-demand webinar, featuring Arteris and AION Silicon, offers practical guidance based on production experience. Viewers will gain a clear understanding of how a disciplined NoC strategy strengthens system integration and improves predictability in advanced AI-driven SoCs.

If you’re planning an AI-focused chip, this is a session you won’t want to miss.

WATCH THE RECORDING

Presenters:

Andy Nightingale, VP of Product Management and Marketing at Arteris

Piyush Singh, Principal Digital SoC Architect at AION Silicon

Also Read:

The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs

Arteris Simplifies Design Reuse with Magillem Packaging

Arteris at the 2025 Design Automation Conference #62DAC


Quantum Computing Algorithms and Applications

by Bernard Murphy on 12-15-2025 at 6:00 am

Quantum computer chip

In an upcoming Innovation blog we’ll get into how quantum computers are programmed. Here I’d like to look more closely at algorithms beyond Grover and Shor, and what practical applications there might be for quantum computing. I also take a quick look at what analysts are saying about potential market size. Even more than in AI, this is a field where it can be difficult to separate promise from reality, especially when core concepts are so alien to conventional compute ideas. I found my research on the topic helped me develop a somewhat clearer view.

Algorithms

First a nod to Grover’s and Shor’s algorithms, the best-known in this area. Grover’s algorithm searches a list for a candidate that best meets some objective, say the largest number in the list. For an unsorted list, Grover’s method is quadratically faster than classical search. Shor’s algorithm factorizes large integers and is exponentially faster than the best classical algorithms.
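To put the quadratic speedup in concrete terms, here is a back-of-envelope query-count comparison (illustrative arithmetic only, not a quantum simulation):

```python
import math

def classical_queries(n):
    """Expected lookups to find one marked item in an unsorted list of n."""
    return n / 2

def grover_queries(n):
    """Grover oracle calls needed: about (pi/4) * sqrt(n)."""
    return (math.pi / 4) * math.sqrt(n)

n = 1_000_000
c = classical_queries(n)   # 500,000 expected classical lookups
g = grover_queries(n)      # ~785 oracle calls
```

For a million-entry list, the quadratic advantage turns half a million expected lookups into under a thousand oracle calls, which is why search-flavored problems are such a natural showcase for Grover’s method.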

The Quantum Fourier Transform underlies the Shor algorithm and is conceptually similar to the classical discrete Fourier transform.
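The similarity can be made concrete: the QFT on n qubits is just the N×N discrete Fourier transform matrix (N = 2^n), normalized so it is unitary. A minimal pure-Python sketch, assuming nothing beyond the standard library (function names are mine, for illustration):

```python
import cmath
import math

def qft_matrix(n_qubits):
    # The QFT on n qubits is the N x N discrete Fourier transform matrix,
    # N = 2**n, with a 1/sqrt(N) normalization that makes it unitary.
    N = 2 ** n_qubits
    w = cmath.exp(2j * math.pi / N)  # primitive N-th root of unity
    return [[w ** (j * k) / math.sqrt(N) for k in range(N)] for j in range(N)]

def is_unitary(M, tol=1e-9):
    # Check M @ M^dagger == identity, entry by entry
    N = len(M)
    for i in range(N):
        for j in range(N):
            s = sum(M[i][k] * M[j][k].conjugate() for k in range(N))
            expect = 1.0 if i == j else 0.0
            if abs(s - expect) > tol:
                return False
    return True

print(is_unitary(qft_matrix(3)))  # True
```

The quantum advantage is not in the matrix itself but in how it is applied: a circuit implements it in O(n²) gates, versus O(N log N) operations for a classical FFT over all N amplitudes.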

Don’t get carried away by exponential speedups. Many algorithms are no faster or only polynomially faster when run on quantum. I may get more into that in another blog in this series.

Quantum Phase Estimation is the foundation for eigenvalue (allowed state energies) estimation in quantum chemistry. The equivalent in classical computing would be a numerical solution to a differential calculus problem. Variational Quantum Eigensolvers (VQE) further extend these ideas to find ground state (minimum) energies in quantum chemistry and materials science. Quantum Approximate Optimization Algorithms (QAOA) are useful for combinatorial optimization problems such as the traveling salesman problem or selecting a subset of discrete options from a larger set as an input to drive classical optimization algorithms.

Applications

I can’t find any claim of production applications today. Looking forward, there are a few important considerations that factor into when we might see such applications. First, there is a clear trend toward algorithms that rely on a hybrid of classical and quantum computation, where algorithms intentionally switch back and forth between classical and quantum stages. Classical stages handle the parts of an algorithm where no speedup is required; quantum stages handle the core components where speedup is very much required.

Second, there is a useful classification representing quantum compute today as an era of Noisy Intermediate Scale Quantum (NISQ) – 10-100+ qubits, allowing only short coherence times. This era is expected to run through ~2030. The subsequent era of Fault Tolerant Quantum Computing (FTQC) should allow for 1000s of ideal qubits (with quantum error correction) and hours of hybrid computation time.

Third, quantum problems in chemistry, physics, and materials science are in many ways the easiest fit for this kind of computation and are therefore the areas where we might see fastest progress, though applications in high-demand non-science domains could also be contenders.

Many claims seem to depend on FTQC; however, there are a few that might be attainable in the NISQ era. Fourier transform algorithms on conventional computers are used extensively today in X-ray crystallography as guidance to materials and biotech molecular structure analysis. Such analysis does not scale in complexity as fast as direct molecular modeling, which is currently limited to small molecules.

An urgent demand that might accelerate development is quantum sensing against terrestrial magnetic field maps as a hack-proof alternative to GPS. There are also arguments for use in finance for portfolio design and credit scoring. Check out this video, starting at about 23:30 for a discussion of a portfolio optimization application.

Later applications are more ambitious, including a new approach to generating ammonia for agricultural fertilizer, improving battery design, better understanding of how drugs are metabolized in the body, and improving understanding of (nuclear) fusion reactions.

Analyst views on market size

McKinsey in 2024 projected that the total quantum technologies market (quantum computing, quantum communication and quantum sensing) could grow up to $97B by 2035, with quantum computing contributing between $28B and $72B of that number. They also project the quantum technologies market could be as large as $200B by 2040.

These are wide ranges and long time-horizons, not encouraging for early investors. BCG says that opportunities in the NISQ era are not panning out as well as they projected in their 2021 report, though they still see active growth from 2030 to 2040 and major growth beyond that point. Meantime they forecast a market for tech providers valued between $1B and $2B by 2030, the bulk of that coming from public sector and corporate investment.

My view, for what it’s worth, started at “never going to get out of university labs”, moved to thinking I might be completely wrong given the scale of public and private investment, and now lands somewhere in between. Finance and security/safety-critical drivers like an alternative to GPS, together with technology advances in quantum error correction and hybrid flows, might accelerate progress at least in some applications, possibly within the NISQ era.

Failing dramatic breakthroughs, I am sure quantum accelerators will become important eventually, perhaps by 2040, though it is interesting that claims of quantum supremacy often seem to supercharge algorithm advances on classical computers! That benefits all markets, though it is not so good for quantum itself.

Also Read:

Superhuman AI for Design Verification, Delivered at Scale

The Quantum Threat: Why Industrial Control Systems Must Be Ready and How PQShield Is Leading the Defense

AI Deployment Trends Outside Electronic Design


imec on the Benefits of ASICs and How to Seize Them

imec on the Benefits of ASICs and How to Seize Them
by Daniel Nenni on 12-14-2025 at 2:00 pm

imec ASIC White Paper

In an era where product differentiation increasingly depends on performance, power efficiency, and form factor, Application-Specific Integrated Circuits (ASICs) have become the ultimate competitive weapon for innovative companies. Unlike off-the-shelf processors, FPGAs, or even ASSPs, a full- or semi-custom ASIC is engineered from the ground up (or from proven building blocks) for one specific application. The result: dramatically lower unit cost at volume, smaller size, lower power consumption, higher performance, better supply-chain control, and strong IP protection.

The whitepaper published by imec’s IC-Link division makes a compelling case that ASICs are no longer reserved for tech giants. Advances in design tools, IP reuse, multi-project wafer (MPW) shuttles, and mature foundry ecosystems have made custom silicon accessible even to startups and mid-sized companies.

Key advantages at a glance
  1. Cost efficiency at scale: While NRE can reach tens of millions for leading-edge nodes (e.g., ~$40 M for a 7 nm high-performance design), the per-die cost drops sharply. The paper shows a realistic 7 nm example where the ASIC unit price lands at $50–60 versus $90–100 for comparable commercial CPUs, delivering ROI of 1.26× over a nine-year lifecycle at 250 k units/year.
  2. Miniaturization & integration: Combining multiple functions into one die (or advanced SiP) shrinks the solution dramatically—critical for wearables, implants, and IoT devices.
  3. Ultra-low power: Shorter interconnects, optimized power management, and removal of unused blocks routinely cut power by 5–10× compared with FPGA or discrete implementations.
  4. Performance: Tailored datapaths, dedicated accelerators, and shorter signal paths deliver the highest throughput at the lowest energy.
  5. Supply-chain resilience & IP protection: A single chip replaces dozens of components from multiple vendors, eliminating obsolescence risks highlighted during the COVID shortage era. Reverse-engineering a modern ASIC is also prohibitively expensive.
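The 1.26× ROI figure in point 1 comes from the whitepaper’s full nine-year lifecycle model. As a much cruder illustration of how per-unit savings amortize the NRE, here is a hypothetical break-even sketch (the $55/$95 price midpoints and the simple linear model are my assumptions, not the paper’s):

```python
def cumulative_savings(nre, saving_per_unit, units_per_year, years):
    # Net benefit after `years`: accumulated per-unit savings minus up-front NRE
    return saving_per_unit * units_per_year * years - nre

nre = 40e6            # ~$40M NRE for a 7 nm design (per the whitepaper example)
saving = 95 - 55      # hypothetical $40 saved per unit vs a commercial CPU
units = 250_000       # units per year

for years in (2, 4, 9):
    print(years, cumulative_savings(nre, saving, units, years))
# Break-even at 4 years; ~$50M net savings over the nine-year lifecycle
```

Under these assumed numbers the crossover arrives at year four, which is why the economics only work when volume production is the target.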
Real-world proof points
  • Capri-Medical’s injectable migraine implant: reduced from a 3-hour surgery to a 20-minute outpatient procedure thanks to an ultra-small, multi-function ASIC.
  • Wiyo’s battery-less Wi-Fi-powered smart tags: only an ASIC could meet the microwatt power budget while harvesting energy from ambient 2.4 GHz signals.
  • Frontgrade Gaisler’s radiation-hardened space processors and Arm’s Morello secure SoC: both rely on custom silicon to satisfy extreme reliability and performance requirements that no commercial part can match.

The ASIC journey demystified

The whitepaper walks readers through every phase: from initial feasibility and system architecture, through detailed ASIC specification, RTL-to-GDSII design (digital) and full-custom layout (analog), co-design of chip and advanced package, mask making, assembly, test development, qualification, and finally volume production. IC-Link emphasizes early package co-design and the importance of design-for-test (DFT) to avoid costly test escapes.

Why partner with IC-Link?

As imec’s dedicated ASIC service arm, IC-Link offers two flexible business models:

  • Full Turnkey (low risk): one single point of contact from specification to delivered tested parts.
  • Customer-Owned Tooling (lowest cost): customers select only the services they need while retaining full control.

With direct access to TSMC, GlobalFoundries, UMC, and specialty processes (including imec’s own radiation-hard libraries), plus decades of packaging and test expertise, IC-Link has supported everything from 180 nm medical implants to 7 nm memory PHYs and complex 18-layer security SoCs.

Bottom Line: For any company facing size, power, cost, or supply-chain constraints—and targeting volume production—an ASIC is no longer a “nice-to-have.” It is rapidly becoming a strategic necessity. The imec IC-Link whitepaper convincingly shows that the barriers to entry have fallen: the expertise, tools, and manufacturing capacity now exist to bring custom silicon within reach of virtually any serious innovator. The question is no longer “Can we afford an ASIC?” but “Can we afford not to have one?”

Also Read:

Revitalizing Semiconductor StartUps

Podcast EP320: The Emerging Field of Quantum Technology and the Upcoming Q2B Event with Peter Olcott

Live Webinar: Considerations When Architecting Your Next SoC: NoC with Arteris and Aion Silicon


CEO Interview with Eelko Brinkhoff of PhotonDelta

CEO Interview with Eelko Brinkhoff of PhotonDelta
by Daniel Nenni on 12-12-2025 at 1:00 pm

Eelko Brinkhoff PhotonDelta

In 25 years of working in economic development, Eelko gained extensive experience and knowledge in the field of Foreign Direct Investment (FDI), internationalisation of SMEs, innovation cooperation and economic development. He has built a strong network in the Netherlands and abroad spanning business, government, knowledge institutes and universities.

In his role as CEO of PhotonDelta, his challenge is to mature the organisation after a period of rapid growth and to become an internationally recognised accelerator for the photonic chip industry. PhotonDelta plays a key role in making the integrated photonics ecosystem indispensable for the goals and challenges that we face today. Photonic chips will become critical in various applications such as quantum computing, robotics, sustainable agriculture and autonomous driving. PhotonDelta, as a Dutch world-leading ecosystem, will be a driving force to make this happen.

Tell us about your organisation.

PhotonDelta is a non-profit organisation supporting an end-to-end value chain for photonic chips that designs, develops, and manufactures innovative solutions that contribute to a better world. We do so by creating global awareness and promoting the benefits and potential of the Dutch and European photonic chip industry and its technologies. Leveraging funding from the National Growth Fund, alongside strategic investments, we catalyse the acceleration of the photonic chip industry.

What problems are the companies that you work with solving?

The PhotonDelta Ecosystem is an end-to-end value chain for photonic chips that designs, develops, and manufactures innovative solutions that contribute to a better world. The ecosystem is at the very forefront of photonic chip research, pioneering new products and solutions.

What application areas are you seeing the most exciting developments in?

Right now, we see adoption in markets like Datacom to be able to send more data while using less energy (think AI-demand), sensing solutions for healthcare diagnostics, and photonic chips for quantum computing at room temperature.

What keeps businesses in this industry up at night?

Companies in this industry often worry about how to scale fast enough to deliver affordable chips without sacrificing quality or efficiency. That pressure is amplified by the need to secure necessary funding to support expansion. Businesses also struggle with a complex regulatory environment that can slow progress and increase costs. On top of that, finding and retaining the specialized talent required for chip design, engineering, and manufacturing remains a major challenge.

How do companies normally engage with your organisation?

We support an ecosystem of more than 70 startups and scale-ups with programmes on Talent, Tech, Funding, and Internationalisation. Via our internationalisation effort, we initiate, guide, and support new partnerships in business development and technology cooperation in key markets in North America, Europe, and Asia.

What success have companies within your ecosystem seen recently?

Companies across the PhotonDelta ecosystem have seen a wave of meaningful progress recently, reflecting both technological maturity and growing commercial traction. Several startups have advanced from promising R&D into concrete milestones. Recent examples of success include Aluvia Photonics, which secured new funding that will allow the company to expand its aluminium-oxide photonic integrated circuit technology and accelerate collaboration with partners throughout the ecosystem. The photonics specialists at Surfix have been working with leading oncologists at the world-renowned NKI (Netherlands Cancer Institute), to create a photonics-based point-of-care testing platform that’s helping save lives today from hypercortisolism or Addison’s Disease. And in another example, PHIX has partnered with Ligitek, Leverage, and ITRI  to develop next-generation high-speed and energy-efficient optical transceivers to address global challenges in data connectivity. This advancement in high-speed optical engines strengthens the Netherlands-Taiwan collaboration and prepares new semiconductor packaging innovations for scalable volume manufacturing. 

You can find more success stories here – https://www.photondelta.com/news/

What’s next for the industry?

The industry is heading into a phase of accelerated growth driven by stronger public and private investment, including initiatives like a potential EU Chips Act 2.0. This funding will be key to reducing PIC production costs and simplifying packaging, both essential for wider market adoption. At the same time, photonics technologies are set to expand rapidly in sustainability-focused sectors such as food, health, and energy, where real-world demand is increasing.

To keep pace, companies will need to deepen their capabilities in hybrid integration, quantum-ready technologies, and scalable design tools that streamline development. Equally important is cultivating a global talent pool and strengthening alignment between international ecosystems to avoid fragmentation. Finally, the industry will continue pushing for shared design and manufacturing standards, enabling greater compatibility across sectors and faster time-to-market.

Overall, the next phase will be defined by scaling through investment, better technology platforms, coordinated talent development, and standards that support broad commercial deployment.

Contact PhotonDelta

Also Read:

CEO Interview with Pere Llimós Muntal of Skycore Semiconductors

CEO Interview with Brandon Lucia of Efficient Computer

CEO Interview with Dr. Peng Zou of PowerLattice


Podcast EP322: A Wide-Ranging and Colorful Conversation with Mahesh Tirupattur

Podcast EP322: A Wide-Ranging and Colorful Conversation with Mahesh Tirupattur
by Daniel Nenni on 12-12-2025 at 10:00 am

Daniel is joined by Mahesh Tirupattur, chief executive officer at Analog Bits. Mahesh leads strategic planning to develop and implement Analog Bits’ vision and mission of enabling the silicon digital world with interfacing IP to the analog world. Additionally, Mahesh oversees all aspects of Analog Bits’ operations to ensure efficiency, effectiveness, and financial security while maintaining strong relationships with key stakeholders, customers, and employees.

In this far-reaching discussion, Mahesh begins with an overview of some key Analog Bits accomplishments for 2025. He spends some time on the company’s relationship with TSMC, including the awards Analog Bits has won over the years and the latest 2 and 3nm IP. He describes in some detail the five joint papers Analog Bits presented at the TSMC OIP event with high-profile partners.

Dan also discusses the Intelligent Power Architecture with Mahesh, who explains what it is and how it impacts chip design. Analog Bits Pinless IP is also explored as Mahesh describes how it works and where it becomes very useful for dense, advanced designs. How Analog Bits power sensors are enabling the broad deployment of AI is also discussed.

Dan ends the conversation by exploring Mahesh’s recent presentation that explains how analog designers and winemakers are similar.

Contact Analog Bits

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.