Digital Implementation and AI at #62DAC
by Daniel Payne on 08-04-2025 at 10:00 am

My first panel discussion at DAC 2025 was all about using AI for digital implementation, as Siemens has a digital implementation tool called Aprisa, which has been augmented with AI to produce better results, faster. Panelists were from Samsung, Broadcom, MaxLinear, AWS and Siemens. In the past it could take an SoC design team 10 to 12 weeks to reach timing closure on a block, but now it can be done in 1-2 weeks with Aprisa AI.

Using Aprisa AI has also improved compute time efficiency by 3X, providing a 10% PPA improvement while beating the old approach of writing expert scripts. Here’s my take on the interactive panel discussion.

AI in EDA tool flows was quite a popular theme at DAC 2025, as it helps meet the challenges of complex ASICs with multiple power domains and 2.5D/3D chip designs, and it even helps in planning before implementation. The cost of manufacturing designs has doubled in just the past two nodes, so there’s a need to be more efficient and to have chips consume less energy.

One technique to speed up verification is using a chatbot to create test benches and suites, as natural language queries are quicker than manually writing UVM. The engineering shortage is impacting SoC designs, and even training new engineers takes valuable resources, so AI is helping out by shortening the learning curve with EDA tools and making experienced engineers more productive.

AI is being used to make early tradeoff exploration possible, resulting in improvements in PPAT. A new hire can be trained using AI with natural language in about one month, instead of the six months the old way took. Even design variants can be completed more quickly, and with fewer engineers than before, when AI is in the flow.

Before AI usage in EDA flows, design teams couldn’t take on all the projects that they wanted to because of the lack of engineering resources, and with 3nm chip designs costing $300M, the pressure is on to get first silicon working. Previous design cycles of 12-18 months can now be compressed into 6-9 month cycles, fueled by AI-based tools.

Our semiconductor industry has a market size of $650 billion today, projected to reach $1T by 2030, when we expect to see systems with 1 trillion transistors, aided by AI taking on many of the routine engineering tasks like optimizing EDA tool runs.

Agents are poised to enter EDA flows, further improving the efficiency and productivity of design and verification teams. Agents will handle optimizations, and agentic AI will help solve complex problems and find new solutions. These optimizations need to be accurate enough to be relied upon. Humans will still focus on the architectural tradeoffs for a system.

EDA design and verification in the cloud has taken off in the past three years. We can expect to see AI agents doing placement and routing, and maybe even improving timing closure tasks. Verification agents can already help by analyzing and even removing human-induced errors.

AI usage is driven both from the top-down and bottom-up in organizations, as managers and engineers discover and benefit from AI efficiencies and improvements. Learning how to prompt an LLM for best results is a new engineering skill. Reports and emails are benefiting from the use of ChatGPT.

Larger companies that train their own LLM will have an advantage over smaller companies, simply because their models are larger and smarter. We still need human experts to validate all AI results for correctness. EDA companies that have created LLMs report rapid improvements in the percentage of correct answers.

Reaching power goals is possible with AI, and the Aprisa tool from Siemens is showing 6-13% power improvements. Engineers don’t have to be Aprisa tool experts to get the best results, as the AI decides which tool setting combinations produce the best results.

Bigger, more complex SoC projects see more benefit from AI implementation tools, as the AI chooses optimal tool settings based on machine learning. Full-custom IC flows are also reporting benefits from AI-based flows. Aprisa is working on custom clock tree generation through a natural language interface, and there’s currently a cockpit for invoking natural language commands. Aprisa AI results are showing 10X productivity and 10% better PPA, with up to a 3X improvement in compute time efficiency.

Summary

Fully agentic flows are the long-term goal for EDA tools, and AI today is helping improve full-custom IC design and big digital implementation. Engineers need to adapt to the use of AI in their EDA tool flows, learning the best prompts. With the new efficiencies it is possible to have fewer engineers who are more productive than their predecessors. EDA customers want the option to use their own LLMs, or to change LLMs as they see fit, in their tool flows.


Synopsys Webinar – Enabling Multi-Die Design with Intel
by Mike Gianfagna on 08-04-2025 at 6:00 am

As we all know, the age of multi-die design has arrived, and along with it many new design challenges. There is a lot of material discussing the obstacles to achieving more mainstream access to this design architecture, and some good strategies to conquer those obstacles. Synopsys recently published a webinar that took this discussion to the next level. The webinar began with an overview of multi-die design and its challenges, and then an Intel technologist weighed in on what he’s seeing and how the company is collaborating with Synopsys.

The experience of a real designer is quite valuable when discussing new methodologies such as multi-die design, and this webinar provides that perspective. There are great insights to be gained. A replay link is coming, but first let’s take a big-picture view of this Synopsys webinar on enabling multi-die design with Intel.

The Synopsys Introduction

The webinar begins with a short but comprehensive context setting from Amlendu Shekhar Choubey, Senior Director, Product Management at Synopsys. He manages the 3DIC Compiler platform and has over 20 years of experience in EDA, semiconductor IP, and advanced packaging, with a strong background in product management, product strategy, and strategic partnerships. Amlendu has expertise in package-board software, including AI-driven design solutions, cloud-based services, and driving growth in emerging markets. He holds an MBA from UC Berkeley’s Haas School of Business and a B. Tech in Electrical Engineering from IIT Kanpur.

Amlendu began with an eye-catching chart depicting the impact AI has had on the size of the semiconductor market. Another sobering prediction is that 100% of AI chips for data centers will be multi-die designs. The chart is shown below.

He concluded his presentation, and set the stage for what followed, with an overview of the Synopsys multi-die design solution, focusing on the Synopsys 3DIC Compiler exploration-to-signoff platform. The goal of this approach is to efficiently create, implement, optimize, and close in one place. The platform is depicted in the chart below.

Multi-Die Design Methodology

Now, let’s look at some brief highlights of comments from Intel.

Intel Presents: Modern EDA Solutions for Scalable Heterogeneous Systems

This portion of the webinar was presented by Vivek Rajan, Senior Principal Engineer at Intel. Vivek has over 25 years of experience in digital design methodology, chip integration, technology, and 3DIC system co-optimization. He received his bachelor’s degree in electrical engineering from IIT Kharagpur, India and his master’s degree in electrical systems engineering from the University of Connecticut. Vivek actively raises awareness and drives innovation for emerging shifts in chip integration and systems design. As an invited speaker, he has delivered several technical presentations at industry conferences.

Vivek began by saying that, “It is a great pleasure to present this webinar on multi-die challenges and opportunities … and what we have done collaborating with Synopsys for many years.” Vivek’s presentation outline includes:

  • Executive Summary
  • Multi-Die Challenges and Opportunities
  • Generational Collaboration Between Intel and Synopsys for Multi-Die Solutions
  • Peeking Ahead: Core Folding

Vivek discussed some of the unique challenges of managing and configuring die-to-die IP and how Intel has approached this challenge. He then went into substantial detail on the many planning requirements for 3D IC design and discussed the focus areas of collaboration between Intel and Synopsys, which are summarized below.

Intel/Synopsys Collaboration Focus Areas

The details of the 3D IC planning and implementation workflows being developed at Intel were presented. Vivek also went into detail regarding core folding, an approach to partitioning and layout of 3D designs.

He concludes with the following points:

  • EDA tool capabilities are essential enablers for Multi Die Designs
  • Our (INTC/SNPS) collaboration has been fruitful for Intel & ecosystem!
  • Early Design Prototype enablement is paramount for decision making
  • Today, tool features for 3DIC Construction & assembly are fully available
  • Next step is full automation for Core Folding and Scale

To Learn More

A webinar that highlights a real designer’s perspectives and experiences is quite valuable. If multi-die design is in your future, seeing what Intel is doing can be quite useful.

You can access the webinar here: Intel Presents: Modern EDA Solutions for Scalable Heterogeneous Systems. And that’s the Synopsys webinar – enabling multi-die design with Intel.

Also Read:

cHBM for AI: Capabilities, Challenges, and Opportunities

Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh

Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies


Is a Semiconductor Equipment Pause Coming?
by Robert Maire on 08-03-2025 at 10:00 am

– Lam put up good numbers but H2 outlook was flat with unknown 2026
– China remains high & exposed at 35% of biz while US is a measly 6%
– Unclear if this is peak, pause, digestion, technology or normal cycle
– Coupled with ASML soft outlook & stock run ups means profit taking

Nice quarter but expected given stock price

Lam reported revenues of $5.17B with gross margins of 50.3% and non-GAAP EPS of $1.33, at the high end and a slight beat.

Outlook for the current quarter is $5.2B ±$300M and EPS of $1.20 ±$0.10.

Lam talked about the second half being flat with the first half and an unclear 2026 outlook so far… somewhat echoing ASML…

China 35%…US 6% of business

China remains both the biggest customer and the biggest exposure at 35% of business. Korea is a distant second at 22%, Taiwan 19%, Japan 14% and the US a distant, miniscule 6%.

Given that China is outspending the US by a ratio of 6 to 1, we see no way that the US could ever catch up or even come close to China.

This clearly shows that whatever efforts the US government is making toward a semiconductor comeback are obviously failing.

This remains a large exposure to the current trade issues that are still not settled with China.

This red flag will continue for the near and medium term.

Profit taking given stock run up in the face of slowing outlook & uncertainty

Lam’s stock was off in the aftermarket as well as during the normal session, as the good quarter doesn’t outweigh the soft outlook and China exposure.

With the amount we have seen the semiconductor equipment stocks run up on the AI tidal wave, it’s clear that the stocks, including Lam, have gotten ahead of themselves and reality.

Although AI is still huge, the rest of the chip industry, and equipment specifically, doesn’t deserve the run-up, as non-AI-related business is just so-so at best.

The stocks

AMAT, KLAC & ASML have a similar profile and will be similarly weak.

We don’t see a change in momentum any time soon, and we may have an overall flattish outlook coupled with risks associated with trade and global politics, which could dampen even that flat outlook.

It’s important to remember that chip equipment stocks are somewhat disconnected from the likes of NVDA and TSMC, as AI continues to do well.

The recent Samsung/Tesla news doesn’t help equipment stocks much and obviously hurts Intel and the outlook for US-related chip spend.

Taking money off the table in equipment names seems prudent given what we have heard so far…

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor) specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

Musk’s new job as Samsung Fab Manager – Can he disrupt chip making? Intel outside

Elon Musk Given CHIPS Act & AI Oversight – Mulls Relocation of Taiwanese Fabs

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary


CEO Interview with Dr. Avi Madisetti of Mixed-Signal Devices
by Daniel Nenni on 08-03-2025 at 6:00 am

Avi Madisetti is the CEO and Founder of Mixed-Signal Devices, a fabless semiconductor company delivering multi-gigahertz timing solutions. A veteran of Broadcom and Rockwell Semiconductor, Avi helped pioneer DSP-based Ethernet and SerDes architectures that have shipped in the billions. He later co-founded Mobius Semiconductor, known for ultra-low power ADCs, DACs, and transceivers used in commercial and defense systems. At Mixed-Signal Devices, Avi is now advancing femtosecond-level jitter and scalable CMOS architectures to power next-gen AI datacenters, 5G infrastructure, and automotive platforms.

Tell us about your company.

At Mixed-Signal Devices, we’re reinventing timing for the modern world. From AI data centers to radar, 5G base stations to aerospace systems, today’s technologies demand timing solutions that are not only ultra-fast but also programmable, scalable, and rock-solid under extreme conditions. That’s where we come in.

We’re a new kind of timing company, founded by engineers who have built foundational technologies at companies like Broadcom. We saw that conventional clock architectures—especially legacy quartz and analog PLL-based designs—were no longer scaling with system demands. We created something different: a digitally synthesized, CMOS-based timing platform that combines the precision of crystals with the flexibility of digital design.

Our patented “Virtual Crystal” architecture enables multi-gigahertz performance with femtosecond-level jitter and sub-Hz frequency programmability. It’s all built on silicon, optimized for integration, and designed to simplify clock architectures from day one.
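
Mixed-Signal Devices has not published its internal architecture, but the arithmetic behind sub-Hz programmability in any digitally synthesized clock is easy to sketch. In a generic direct-digital-synthesis (DDS) style numerically controlled oscillator, frequency resolution is set by the phase-accumulator width; the accumulator width and clock rate below are purely illustrative assumptions:

```python
# Generic DDS/NCO sketch -- illustrative only, not Mixed-Signal Devices'
# actual (proprietary) architecture.
# A phase accumulator of width N clocked at f_clk advances by a frequency
# control word (FCW) each cycle, giving f_out = FCW * f_clk / 2**N.

ACC_BITS = 48            # assumed accumulator width
F_CLK = 2.0e9            # assumed 2 GHz synthesis clock

def fcw_for(f_out: float) -> int:
    """Frequency control word for a desired output frequency."""
    return round(f_out * 2**ACC_BITS / F_CLK)

def actual_freq(fcw: int) -> float:
    return fcw * F_CLK / 2**ACC_BITS

resolution = F_CLK / 2**ACC_BITS
print(f"tuning resolution: {resolution*1e6:.3f} uHz")  # ~7.1 uHz, well below 1 Hz

fcw = fcw_for(156.25e6)  # e.g. a common 156.25 MHz Ethernet reference
print(f"FCW = {fcw}, actual = {actual_freq(fcw)/1e6:.9f} MHz")
```

With a sufficiently wide accumulator, the frequency step size falls into the microhertz range, which is how digital synthesis delivers sub-Hz programmability without changing any analog hardware.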

What problems are you solving?

Modern electronic systems are running faster and hotter, and are more complex than ever. Whether you’re trying to scale a GPU fabric in an AI data center or coordinate coherent RF signals in a phased array radar, timing precision becomes the bottleneck. Traditional clocking solutions weren’t built for this world.

We solve that by eliminating the analog limitations. Our all-CMOS digital synthesis platform delivers low-jitter, low-phase-noise clocks at up to 2 GHz, without bulky crystals or noisy PLLs. And because we built our own DAC architecture and waveform engine, we’ve eliminated the spurs and drift that plague conventional solutions.

Whether it’s deterministic synchronization across a rack, reference clock cleanup for PCIe or SerDes, or generating clean LOs for high-speed converters, our portfolio is built to meet the needs of engineers building the world’s most advanced systems.

What are your strongest application areas?

We’re seeing strong traction in four key segments:

  1. AI Infrastructure – Our clocks and synthesizers support ultra-low jitter and precise synchronization for GPU/CPU boards, optical modules, SmartNICs, and PCIe networks.
  2. Wireless Infrastructure and 5G/6G – Our jitter attenuators and oscillators provide reference cleanup and deterministic timing for fronthaul/midhaul networks.
  3. Defense and Radar – Our RF synthesizers with phase-coherent outputs are ideal for beamforming, MIMO, and SAR systems.
  4. Test & Measurement / Instrumentation – Engineers love our digitally programmable, wideband signal sources for high-speed converter testing and system prototyping.

What keeps your customers up at night?

They’re building faster systems with tighter timing and synchronization margins—and legacy clocking just isn’t cutting it. As Ethernet speeds scale to 800G and 1.6T, and new modulation schemes like PAM6 and PAM8 take hold, they’re running into noise, jitter, and skew problems that conventional architectures can’t overcome.

They also worry about integration and supply chain predictability. We address both by delivering clock products that are smaller, programmable, and available in standard CMOS packages. That means fewer components, easier integration, and better reliability—even across temperature and voltage swings.

How do you differentiate from other timing companies?

Mixed-Signal Devices is the first company to combine the best of digital synthesis, high-performance DACs, and BAW-based timestamping into a single, scalable clocking platform. Our “Virtual Crystal” concept gives you phase noise commensurate with high-frequency fundamental mode crystals, crystal-like stability, but with digital programmability and sub-Hz resolution. And our femtosecond jitter performance rivals—and in many cases exceeds—the best quartz and PLL-based solutions.

We’re not retrofitting old designs. We built our architecture from the ground up to meet modern demands. That means our products are clean, simple, and powerful—ideal for engineers who don’t want to patch together three chips when one will do.

What new products are you most excited about?

We just launched the MS4022 RF Synthesizer, a digitally programmable wideband source with output up to 22 GHz and jitter as low as 25 fs RMS. It’s phase-coherent, and can lock to anything from a 1 PPS GPSDO to a 750 MHz reference. It’s a game-changer for radar, wireless, and test equipment.

We’ve also introduced the MS1130 and MS1150 oscillators and MS1500/MS1510 jitter attenuators, supporting frequencies up to 2 GHz and jitter as low as 19 fs. These are already being evaluated in AI compute fabrics and 5G radio access networks. Everything is built on our same core architecture—clean signals, robust programmability, and compact form factors.

How do customers typically engage with your company?

We work closely with design teams, often from first concept through final product. Our solutions are used by some of the most advanced engineers in radar, compute, networking, and defense, and they’re looking for a partner who understands both the signal chain and the system-level challenges.

We also work through select distributors and field engineers, so customers can get hands-on support quickly and scale into volume smoothly. Whether it’s early-stage sampling or joint product validation, we aim to be a true technical partner, not just a vendor.

How do you see timing evolving, and what role will Mixed-Signal Devices play?

Timing is becoming the next system bottleneck. As systems scale to higher speeds (for example 1.6T networking), timing solutions must become faster, cleaner, and more deterministic. Legacy analog solutions can’t keep up. Mixed-Signal Devices is creating a new category of timing, one that’s digital at its core, programmable by design, and scalable with Moore’s Law. We believe the future of timing is fully synthesized, digitally defined, and built to unlock the next generation of compute, communications, and autonomy. That’s the future we’re building, and we’re just getting started.

Also Read:

CEO Interview with Andrew Skafel of Edgewater Wireless

CEO Interview with Jutta Meier of IQE

Executive Interview with Ryan W. Parker of Phononic Inc.


Alphawave Semi and the AI Era: A Technology Leadership Overview
by Daniel Nenni on 08-03-2025 at 6:00 am

The explosion of artificial intelligence (AI) is transforming the data center landscape, pushing the boundaries of compute, connectivity, and memory technologies. The exponential growth in AI workloads—training large language models (LLMs), deploying real-time inference, and scaling distributed applications—has resulted in a critical need for disruptive innovation. Alphawave Semi has emerged as a significant player positioned at the intersection of this transformation, bringing expertise in high-speed connectivity and semiconductor IP to a rapidly evolving AI ecosystem.

AI workloads have escalated data traffic, straining every layer of compute infrastructure. OpenAI data suggests compute demands have doubled every 3 to 4 months since 2012, outpacing Moore’s Law. LLMs such as GPT-4, with trillions of parameters, exemplify this trend. The pressure is no longer only on building faster compute but also on enabling higher bandwidth, lower latency, and more energy-efficient interconnects between CPUs, GPUs, memory, and storage.

This demand for scale and speed has coincided with the rise of heterogeneous computing architectures. Data centers increasingly rely on systems combining CPUs with accelerators like GPUs, ASICs, and FPGAs, tailored for specific AI tasks. At the same time, traditional monolithic SoCs have reached the limits of manufacturable die sizes, prompting a transition to chiplet-based architectures. Chiplets allow integration of best-in-class components with shared power, memory, and logic, enabling modular design and more efficient scaling.

To meet these demands, Alphawave Semi has transformed from a SerDes IP provider into a broader semiconductor solutions company. Its transition began with deep investments in advanced packaging, custom silicon design, and chiplet technology. With roots in high-speed serial interfaces, the company is uniquely positioned to deliver low-power, high-performance interconnects essential for AI data center workloads.

Alphawave Semi’s IP portfolio includes cutting-edge SerDes capable of supporting data rates above 112G, which are crucial for enabling chiplet interconnects, optical transceivers, and PCIe/CXL-based memory fabrics. It supports the emerging Universal Chiplet Interconnect Express (UCIe) standard, a critical development that enables interoperability of chiplets across vendors. This fosters a multi-vendor ecosystem, empowering smaller silicon designers to compete by assembling chiplets into innovative AI processors.

In parallel, memory bottlenecks have become a major challenge. High Bandwidth Memory (HBM) and on-die memory solutions have become integral to AI accelerator performance. Alphawave Semi’s engagement in chiplet-based memory interfaces and its roadmap for integrating CXL-based memory pooling support underline its strategy to address next-gen memory hierarchies.

Alphawave Semi has also expanded into standard products and custom silicon development. In 2023, the company launched a rebrand to reflect its transition from IP licensing to full-stack semiconductor innovation. This includes providing front-end and back-end design, verification, and manufacturing services—an offering increasingly valuable as cloud and hyperscale customers seek to build custom silicon solutions to meet their unique AI performance requirements.

Industry partnerships have further amplified Alphawave’s reach. The company collaborates with key foundry and IP ecosystem leaders such as TSMC, Samsung, ARM, and Intel. It has also signed agreements with AI chip startups like Rebellions, signaling its growing role as an enabler of next-generation compute architectures.

As demand for AI infrastructure continues to grow, Alphawave Semi’s value proposition is becoming clearer: delivering foundational connectivity IP, scalable chiplet technologies, and full custom silicon solutions for customers at every tier of the semiconductor value chain. Its strategy aligns with the trajectory of the AI silicon market, projected to exceed $150 billion by 2027, driven by both inference at the edge and large-scale training in data centers.

In summary, Alphawave Semi stands at a critical juncture in the AI revolution. Its combination of deep IP expertise, chiplet innovation, and customer-centric silicon services positions it as a key enabler of the high-speed, heterogeneous systems powering AI’s future.

You can read the full white paper here.

Also Read:

Podcast EP288: How Alphawave Semi Enables Next Generation Connectivity with Bharat Tailor

Alphawave Semi is in Play!

Podcast EP276: How Alphawave Semi is Fueling the Next Generation of AI Systems with Letizia Giuliano


Materials Selection Methodology White Paper
by Daniel Nenni on 08-02-2025 at 6:00 am

The Granta EduPack White Paper on Materials Selection, authored by Harriet Parnell, Kaitlin Tyler, and Mike Ashby, presents a practical and educational guide to selecting materials in engineering design. Developed by Ansys and based on Ashby’s well-known methodologies, the paper outlines a four-step process to help learners and professionals select materials that meet performance, cost, and functional requirements using the Granta EduPack software.

The materials selection methodology begins with translation, the process of converting a design problem into precise engineering terms. This involves four components: function, constraints, objectives, and free variables. The function describes what the component must do—such as support loads, conduct heat, or resist pressure. Constraints are strict conditions that must be met, such as minimum strength, maximum service temperature, or corrosion resistance. Objectives define what is to be minimized or maximized—typically weight, cost, energy loss, or thermal conductivity. Finally, free variables are parameters the designer is allowed to adjust, such as the material choice itself or geometric dimensions. Defining these clearly is essential for identifying suitable materials later in the process.

The second step is screening, which eliminates materials that do not meet the basic constraints identified during translation. If a material doesn’t meet the required temperature, stiffness, or conductivity, it is screened out. Screening can be done manually by checking material databases, but the Granta EduPack software provides tools for a more visual approach. Using property charts with logarithmic scales, users can apply filters and quickly identify which materials fall outside the necessary limits. These visualizations make it easier to compare large material datasets and help narrow down potential candidates.

After unsuitable options are removed, the ranking step evaluates the remaining materials based on how well they meet the design objectives. This involves using performance indices, which are combinations of material properties that reflect the overall performance for a given function. For instance, if the goal is to design a lightweight and stiff beam, the relevant performance index could be the square root of Young’s modulus divided by density. The better this index, the more suitable the material. These indices can be plotted on property charts within EduPack to show which materials perform best. Materials above the selection line, or toward a defined optimal region, are considered the top choices.
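
As a concrete sketch of how screening and ranking compose, consider the light, stiff beam example. The property values and the temperature constraint below are made-up placeholders, not Granta EduPack data, and this is not the EduPack API:

```python
# Minimal screening-and-ranking sketch for a light, stiff beam.
# Property values and the constraint are illustrative placeholders.
candidates = {
    # name: (Young's modulus E [GPa], density rho [kg/m^3], max service T [C])
    "Al 6061":    (69,  2700, 150),
    "Ti-6Al-4V":  (114, 4430, 350),
    "CFRP":       (110, 1550, 120),
    "Mild steel": (210, 7850, 350),
}

MIN_SERVICE_T = 130  # hard constraint from the translation step (assumed)

# Screening: eliminate anything that violates a hard constraint.
screened = {n: p for n, p in candidates.items() if p[2] >= MIN_SERVICE_T}

# Ranking: performance index for a light, stiff beam, M = sqrt(E) / rho.
def beam_index(props):
    E_gpa, rho, _ = props
    return (E_gpa * 1e9) ** 0.5 / rho

for name, props in sorted(screened.items(), key=lambda kv: beam_index(kv[1]),
                          reverse=True):
    print(f"{name:11s} M = {beam_index(props):6.1f}")
```

Note that CFRP would top the ranking on the index alone; the temperature constraint screens it out first, which is exactly why screening precedes ranking in the methodology.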

The final step is documentation, where the designer further investigates the top candidates. Even if a material performs well according to data, real-world concerns like manufacturing limitations, environmental impact, availability, and historical reliability must also be considered. This step emphasizes broader engineering judgment and the importance of context in final decision-making.

Following the methodology section, the white paper explains how performance indices are derived. They come from the performance equation, which relates the function of the component, its geometry, and the material properties. If the variables in the equation can be separated into those three groups, the material-dependent part becomes the performance index. This index can then be used universally across different geometries and loading scenarios, simplifying the selection process early in design.

Two examples demonstrate how performance indices are formed. In the first, a thermal storage material must store maximum heat per unit cost. The index becomes heat capacity divided by material cost. In the second, a beam must be light and stiff under bending. The derived performance index combines modulus and density. These examples show how specific requirements and constraints lead to practical, optimized material choices.
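
The beam index can be derived explicitly with the standard Ashby argument, restated here from the textbook treatment; L is fixed, the section area A is the free variable, and C1 is a constant set by the loading geometry:

```latex
m = A\,L\,\rho \quad \text{(mass to minimize)}

S = \frac{C_1 E I}{L^3} \ge S^{\ast}, \qquad I = \frac{A^2}{12} \quad \text{(bending stiffness constraint, square section)}

A = \left(\frac{12\,S^{\ast}L^{3}}{C_1 E}\right)^{1/2}
\;\Longrightarrow\;
m = \left(\frac{12\,S^{\ast}}{C_1}\right)^{1/2} L^{5/2}\,\frac{\rho}{E^{1/2}}
```

Only the final factor depends on the material, so minimizing mass means maximizing M = E^(1/2)/rho, the index quoted above.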

Granta EduPack supports these concepts through its interactive features. Users can plot performance indices as selection lines with defined slopes on charts or use indices as axes to rank materials visually. The Performance Index Finder tool automates the index derivation process by letting users input their function, constraints, objectives, and free variables directly. The software then produces a relevant performance index and displays suitable materials accordingly.

The paper concludes with a list of references and educational resources. Ashby’s textbook Materials Selection in Mechanical Design is cited as the foundational source. Additional resources include Ansys Innovation Courses, video tutorials, and downloadable case studies focused on mechanical, thermal, and electromechanical applications. These are intended to reinforce the material and support both independent learning and classroom instruction.

In summary, this white paper offers a clear, structured, and practical approach to materials selection. It not only teaches the methodology behind choosing the right materials but also integrates powerful software tools that make the process faster and more intuitive. By combining theoretical rigor with real-world practicality, the Granta EduPack methodology equips students and engineers with the skills to make informed, optimized, and sustainable material choices.

You can download the paper here.

Also Read:

ML and Multiphysics Corral 3D and HBM

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design

Ansys and eShard Sign Agreement to Deliver Comprehensive Hardware Security Solution for Semiconductor Products

Formal Verification: Why It Matters for Post-Quantum Cryptography
by Daniel Nenni on 08-01-2025 at 10:00 am

Formal verification is becoming essential in the design and implementation of cryptographic systems, particularly as the industry prepares for post-quantum cryptography (PQC). While traditional testing techniques validate correctness over a finite set of scenarios, formal verification uses mathematical proofs to guarantee that cryptographic primitives behave correctly under all possible conditions. This distinction is vital because flaws in cryptographic implementations can lead to catastrophic breaches of confidentiality, integrity, or authenticity.

In cryptographic contexts, formal verification is applied across three primary dimensions: verifying the security of the cryptographic specification, ensuring the implementation aligns precisely with that specification, and confirming resistance to low-level attacks such as side-channel or fault attacks.

The first dimension involves ensuring that the design of a cryptographic primitive fulfills formal security goals. This step requires proving that the algorithm resists a defined set of adversarial behaviors based on established cryptographic hardness assumptions. The second focuses on verifying that the implementation faithfully adheres to the formally specified design. This involves modeling the specification mathematically and using tools like theorem provers or model checkers to validate that the code behaves correctly in every case. The third area concerns proving that the implementation is immune to physical leakage—such as timing or power analysis—that could inadvertently expose secret data. Here, formal methods help ensure constant-time execution and other safety measures.
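
To make the constant-time idea concrete, here is a generic sketch (not PQShield code) of the distinction such proofs formalize: an early-exit comparison leaks the position of the first mismatch through its running time, while a constant-time version touches every byte regardless of the data:

```python
# Generic constant-time comparison sketch (illustrative, not PQShield code).

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time depends on where the first mismatch occurs,
    # which a timing side channel can exploit to recover secrets byte by byte.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # Accumulate differences with XOR/OR so every byte is always inspected;
    # running time depends only on the length, not the contents.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# In practice Python's hmac.compare_digest provides this; formal tools aim
# to prove such properties at the machine-code level, for all inputs.
```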

Formal verification also contributes to broader program safety by identifying and preventing bugs like buffer overflows, null pointer dereferencing, or other forms of undefined behavior. These bugs, if left unchecked, could become exploitable vulnerabilities. By combining specification security, implementation correctness, and low-level robustness, formal verification delivers a high level of assurance for cryptographic systems.

While powerful, formal verification is often compared to more traditional validation techniques like CAVP (Cryptographic Algorithm Validation Program) and TVLA (Test Vector Leakage Assessment). CAVP ensures functional correctness by running implementations through a series of fixed input-output tests, while TVLA assesses side-channel resistance via statistical analysis. These methods are practical and widely used in certification schemes but inherently limited. They can only validate correctness or leakage resistance across predefined scenarios, which means undiscovered vulnerabilities in untested scenarios may remain hidden.

Formal verification, by contrast, can prove the absence of entire classes of bugs across all input conditions. This level of rigor offers unmatched assurance but comes with trade-offs. It is resource-intensive, requiring specialized expertise, extensive computation, and significant time investment. Additionally, it is sensitive to the accuracy of the formal specifications themselves. If the specification fails to fully capture the intended security properties, then even a correctly verified implementation might still be vulnerable in practice.

Moreover, formal verification is constrained by the scope of what it models. For instance, if the specification doesn’t include side-channel models or hardware-specific concerns, those issues may go unaddressed. Tools used in formal verification can also contain bugs, which introduces the risk of false assurances. To address these issues, developers often employ cross-validation with multiple verification tools and complement formal verification with traditional testing, peer review, and transparency in the verification process.

Despite these limitations, formal verification is increasingly valued, especially in high-assurance sectors like aerospace, defense, and critical infrastructure. Although most certification bodies do not mandate formal verification—favoring test-driven approaches like those in the NIST and Common Criteria frameworks—its use is growing as a differentiator in ensuring cryptographic integrity. As cryptographic systems grow in complexity, particularly with the shift toward post-quantum algorithms, the industry is recognizing that traditional testing alone is no longer sufficient.

PQShield exemplifies this forward-looking approach. The company is actively investing in formal verification as part of its product development strategy. It participates in the Formosa project and contributes to formal proofs for post-quantum cryptographic standards like ML-KEM and ML-DSA. The company has verified its implementation of the Keccak SHA-3 permutation, as well as the polynomial arithmetic and decoding routines in its ML-KEM implementation. PQShield also contributes to the development of EasyCrypt, an open-source proof assistant used for reasoning about cryptographic protocols.

Looking ahead, PQShield plans to extend formal verification across more of its software and hardware offerings. This includes proving the correctness of high-speed hardware accelerators, particularly the arithmetic and sampling units used in PQC schemes. These efforts rely on a mix of internal and open-source tools and demonstrate the company’s commitment to secure-by-design principles.

In conclusion, formal verification offers critical advantages for cryptographic security, particularly as the industry transitions to post-quantum systems. It complements conventional testing methods by addressing their limitations and providing strong guarantees of correctness, robustness, and resistance to attack. While not yet universally mandated in certification schemes, formal verification is fast becoming a cornerstone of next-generation cryptographic assurance—and companies like PQShield are leading the way in putting it into practice.

You can download the paper here.

Also See:

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

Podcast EP285: The Post-Quantum Cryptography Threat and Why Now is the Time to Prepare with Michele Sartori

PQShield Demystifies Post-Quantum Cryptography with Leadership Lounge


Exploring the Latest Innovations in MIPI D-PHY and MIPI C-PHY
by Daniel Nenni on 08-01-2025 at 10:00 am

The white paper “Exploring the Latest Innovations in MIPI D-PHY and MIPI C-PHY” details the latest developments in these two critical high-speed interface technologies, highlighting how they evolve to meet modern demands in camera and display systems across automotive, industrial, healthcare, and XR applications.

The evolution of MIPI D-PHY and MIPI C-PHY reflects the ongoing push toward higher performance and power-efficient data interfaces in camera and display systems. Originally developed for the mobile industry, both PHY types have significantly matured to support diverse applications in automotive, healthcare, industrial vision, and extended reality (XR). These advancements are essential to accommodate surging data rates driven by higher resolutions, expanded frame rates, and real-time image processing.

MIPI D-PHY, introduced in 2009, has incrementally increased its per-lane throughput from 2.5 Gbps to 11 Gbps over several specification versions. Key to supporting these higher rates are signal integrity enhancements such as transmitter de-emphasis and receiver Continuous Time Linear Equalization (CTLE), first introduced in v2.0. Version 3.5 added non-linear Decision Feedback Equalization (DFE) to further improve signal performance, especially in the 6–11 Gbps range. These techniques help mitigate channel losses across increasingly complex physical environments including PCB traces, packages, and connectors.
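
To make the equalization terminology concrete, here is a generic two-tap feed-forward de-emphasis sketch; it is illustrative only, as the D-PHY specification defines its own coefficient ranges and signaling levels:

```python
# Generic 2-tap transmitter de-emphasis sketch (illustrative; MIPI D-PHY
# defines its own coefficients and levels). A post-cursor tap subtracts a
# fraction of the previous symbol, so transitions keep full swing while
# repeated symbols are attenuated, pre-compensating channel low-pass loss.

def de_emphasize(symbols, post_tap=0.2):
    """symbols: +1/-1 per unit interval; returns shaped TX levels."""
    c0, c1 = 1.0 - post_tap, -post_tap   # 2-tap FIR: y[n] = c0*x[n] + c1*x[n-1]
    prev = symbols[0] if symbols else 0
    out = []
    for s in symbols:
        out.append(c0 * s + c1 * prev)
        prev = s
    return out

bits = [+1, +1, +1, -1, +1, -1, -1, +1]
print(de_emphasize(bits))
# Transitions emerge at full swing (+-1.0) while repeated symbols sit at
# +-0.6, boosting the high-frequency content the channel attenuates most.
```

Receiver-side CTLE and DFE attack the same problem from the other end of the channel, and the spec combines all three as rates climb toward 11 Gbps.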

The power consumption challenges that arose as silicon geometries shrank were tackled by introducing new signaling modes. The original 1.2V LVCMOS signaling used for low-power control became problematic in modern nodes with lower core voltages. MIPI D-PHY responded by offering LVLP mode to lower the voltage swing to 0.95V, and ultimately developed the Alternate Low Power (ALP) mode. ALP mode discards the LP transmitter/receiver entirely, reusing the high-speed circuits for low-power signaling. This not only improves leakage characteristics and reduces IO loading but also enables the PHY to operate over longer channels, up to 4 meters.

The ALP signaling introduces the ALP-00 state, a collapsed differential mode where both wires are grounded, minimizing power during idle periods. Wake pulses and high-speed bursts are coordinated using embedded control signals, enhancing synchronization. Notably, ALP also supports fast lane turnaround, which significantly reduces latency in bidirectional interfaces compared to legacy LP-mode lane switching. Combined with spread spectrum clocking, first introduced in v2.0 to mitigate EMI, MIPI D-PHY’s power and emissions profile is increasingly well-suited for automotive and industrial-grade deployments.

In a major architectural shift, MIPI D-PHY v3.5 introduced Embedded Clock Mode (ECM). In ECM, clock information is no longer carried on a dedicated lane but embedded in the data stream using 128b/132b encoding with clock and data recovery (CDR). This allows the clock lane to be repurposed as a fifth data lane, increasing throughput by 25% in common configurations. ECM also reduces EMI by eliminating the always-on toggling clock line, and permits skew-insensitive timing between data lanes. However, the trade-off is reduced backward compatibility: ECM-only PHYs cannot interoperate with older Forwarded Clock Mode (FCM)-only devices.
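
The throughput gain is simple arithmetic. In the back-of-the-envelope sketch below, the 25% figure is the raw gain from repurposing the clock lane, with the 128b/132b line code claiming roughly 3% of it back; the lane rate is taken from the 11 Gbps figure above:

```python
# Back-of-the-envelope ECM vs FCM throughput (lane rate from above).
LANE_RATE = 11e9                # bits/s per lane

fcm = 4 * LANE_RATE             # FCM: 4 data lanes + dedicated clock lane
ecm_raw = 5 * LANE_RATE         # ECM: clock lane becomes a 5th data lane (+25%)
ecm_net = ecm_raw * 128 / 132   # minus 128b/132b encoding overhead (~3%)

print(f"FCM {fcm/1e9:.1f} Gbps, ECM raw {ecm_raw/1e9:.1f} Gbps, "
      f"ECM net {ecm_net/1e9:.2f} Gbps ({(ecm_net/fcm - 1)*100:.0f}% over FCM)")
```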

MIPI C-PHY, launched in 2014, uses a 3-wire lane and a ternary signaling method to achieve efficient data encoding. The original 6-wirestate configuration encoded 16 bits in 7 symbols for an encoding efficiency of 2.28x. As symbol rates increased from 2.5 to 6 Gsps, data rates rose to 13.7 Gbps per lane. Equalization support was expanded in versions 1.2 and 2.0 through CTLE and various training sequences. Low power features were also introduced, including LVHS, LVLP, and ALP modes, often mirroring D-PHY enhancements while adapting them to C-PHY’s unique signaling format.

The landmark change came with C-PHY v3.0 and the 18-Wirestate mode. This innovation retains the same 3-wire lane interface but increases encoding efficiency to 3.55x by introducing 18 distinct differential states across wire pairs. With this, the PHY can achieve up to 24.84 Gbps per lane on short channels. New encoding schemes and state transitions were developed, with each symbol defined by a 5-bit code representing polarity, rotation, and flip attributes. The additional signaling levels require multi-level slicers in the receiver and increased TX power but enable significantly greater throughput.
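
The quoted rates follow directly from the encoding efficiencies, as this quick arithmetic check shows:

```python
# Sanity-check the C-PHY rates quoted above.
legacy_eff = 16 / 7                           # 6-wirestate: 16 bits in 7 symbols ~ 2.29x
print(f"{legacy_eff * 6e9 / 1e9:.1f} Gbps")   # 6 Gsps -> 13.7 Gbps per lane

new_eff = 3.55                                # 18-wirestate bits per symbol
print(f"{24.84e9 / new_eff / 1e9:.2f} Gsps")  # 24.84 Gbps implies ~7 Gsps symbol rate
```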

The 18-Wirestate system also introduces a more sophisticated lane mapping and control mechanism. By embedding turnaround codes into the last transmitted symbol burst, C-PHY accelerates lane reversal, improving duplex performance. Furthermore, signal integrity is preserved through careful voltage slicing and receiver sensitivity enhancements, ensuring reliability despite reduced signal-to-noise ratio due to the multi-level signaling.

Together, the continued evolution of D-PHY and C-PHY demonstrates the MIPI Alliance’s focus on scalable, forward-compatible solutions that can bridge mobile, automotive, and emerging computing environments.

You can read the full whitepaper here.


Podcast EP301: Celebrating 20 Years on Innovation with yieldHUB’s John O’Donnell
by Daniel Nenni on 08-01-2025 at 10:00 am

Dan is joined by John O’Donnell, Founder and CEO of yieldHUB, a pioneering leader in advanced data analytics for the semiconductor industry. Since establishing the company in 2005, he has transformed it from a two-person startup into a trusted multinational partner that empowers some of the world’s leading semiconductor companies to improve yield, reduce test costs, boost engineering efficiency and enhance quality.

yieldHUB recently celebrated its 20th anniversary. The company has also received national recognition for its accomplishments in Ireland. Dan explores yieldHUB’s history and future plans with John, including the company’s worldwide expansion and its new R&D focus areas. John describes the company’s new yieldHUB Live system, a test-agnostic real-time capability with an AI recommendation system and digital twin models. John explains that AI is a new development focus for the company and that this new system is having a significant impact on improving yield, reducing test costs, and increasing product quality. John also describes a new API-native platform that is in development.

Dan also explores the four pillars of yieldHUB with John: the previously mentioned improve yield, reduce test cost, boost engineering efficiency, and enhance quality. John describes the importance of each pillar and explains the approach yieldHUB takes to achieve these goals with its customers.

Contact yieldHUB here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Architecting Your Next SoC: Join the Live Discussion on Tradeoffs, IP, and Ecosystem Realities
by Daniel Nenni on 08-01-2025 at 8:00 am

Designing a system-on-chip (SoC) has never been more complex—or more critical. With accelerating demands across AI, automotive, and high-performance compute applications, today’s SoC architects face a series of high-stakes tradeoffs from the very beginning. Decisions made during the earliest phases of design—regarding architecture, IP selection, modeling, and system integration—can make or break a project’s success.

That’s why SemiWiki is proud to host a live webinar:

“What to Consider When Architecting Your Next SoC: Architectural Tradeoffs, IP Selection, and Ecosystem Realities”

 Thursday, August 14, 2025 | 9:00 AM PDT

This session will feature a practical, fast-paced conversation between two seasoned experts in SoC architecture and IP design:

  • Paul Martin, Global Director of SoC Architecture, Aion Silicon

  • Darren Jones, Distinguished Engineer & Solutions Architect, Andes Technology

Together, they’ll walk through real-world scenarios, decision frameworks, and lessons learned from working with some of the most demanding silicon customers in the world.

Rather than a static presentation, the format is designed as a fireside chat—highlighting the nuance and complexity of early-stage architecture decisions through dialog. Expect candid insights, live Q&A, and audience engagement—not a canned marketing pitch.

Register now to reserve your spot and be part of the conversation. 

What You’ll Learn:

  • How to weigh architectural tradeoffs when performance, flexibility, and schedule are in tension

  • What questions to ask when selecting IP across multiple vendors

  • The role of modeling, simulation, and emulation in derisking “works-first-time” silicon

  • How system-level decisions (like interconnect width or coherency models) impact overall architecture

  • Where ecosystem support—toolchains, deliverables, and foundry alignment—can determine downstream success

You’ll also gain a deeper understanding of performance, power, and area (PPA) metrics—how to interpret them, and how to avoid common traps when comparing IP blocks. This session goes beyond datasheets to explore how real design teams validate assumptions and make decisions that hold up under pressure.

Whether you’re leading architecture for your next chip, evaluating IP options, or supporting teams through SoC integration, this webinar will sharpen your perspective and provide actionable strategies.

Why Attend Live:

This is a live-only event, and attendees will have the chance to ask questions directly. If your team is facing architectural decisions this quarter—or simply wants to learn how top-tier firms approach system tradeoffs—this is a valuable opportunity to hear from peers in the trenches.

Register now to reserve your spot and be part of the conversation. 

Speaker Bios:

Darren Jones, Distinguished Engineer and Solutions Architect, Andes Technology

Darren Jones is a seasoned engineering leader with more than three decades of experience in processor architecture, SoC design, and IP integration. Currently a Distinguished Engineer and Solutions Architect at Andes Technology, he helps customers develop high-performance RISC-V–based solutions tailored to their systems, drawing on his deep expertise in system-on-chip design and verification.

Prior to Andes, Darren held senior leadership roles at Esperanto Technologies, Wave Computing, Xilinx, MIPS Technologies, and LSI Logic, where he led teams through multiple successful chip tapeouts—from 7nm inferencing accelerators to complex multi-core and multithreaded architectures. His experience spans architecture definition, RTL design, IP delivery, and full-chip integration.

Darren holds more than 25 patents in processor design and multithreading. He earned his M.S. in electrical engineering from Stanford University and his B.S. with highest honors from the University of Illinois Urbana-Champaign.

Paul Martin, Global Director of SoC Architecture, Aion Silicon

Paul Martin is the Global Director of SoC Architecture at Aion Silicon, where he leads international engineering teams and drives customer engagement across complex semiconductor design projects. With decades of experience in commercial, technical, and strategic roles at companies including ARM and NXP, he has helped bring cutting-edge SoC technologies to market. Martin is known for his ability to bridge technical innovation with business value across Europe, North America, and Asia.


Also Read:

The Sondrel transformation to Aion Silicon!

2025 Outlook with Oliver Jones of Sondrel

CEO Interview: Ollie Jones of Sondrel