Podcast EP283: The evolution of Analog High Frequency Design and the Impact of AI with Matthew Ozalas of Keysight
by Daniel Nenni on 04-11-2025 at 10:00 am

Dan is joined by Matthew Ozalas, a distinguished RF engineer at Keysight Technologies. With extensive experience in RF and microwave engineering, Matthew has made significant contributions to the field, particularly in the design and development of RF power amplifiers. His expertise spans hardware and software applications as well as design and automation.

In this insightful discussion, Dan explores the realm of RF/high-frequency design with Matthew, who describes some of the unique requirements of this class of design. The impact of AI is also explored. Matthew explains how the massive amount of available data needs to be harnessed, using methods that differ from mainstream digital design, and outlines strategies for building new, AI-assisted design flows. He describes the importance of using the same analysis technologies across all phases of the design, from chip to package to system, the work going on at Keysight to enable novel AI-assisted design flows for high-frequency design, and what the future will look like.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Webinar: The Importance of Security in Multi-Die Designs – Navigating the Complex Landscape
by Daniel Nenni on 04-11-2025 at 6:00 am

In today's rapidly evolving digital landscape, the security of electronic systems is of the highest priority. This importance is underscored by technological advancements and increasing regulatory demands. Multi-die designs, which integrate multiple dies (also called chiplets) into a single package, introduce complexity and potential vulnerabilities. These vulnerabilities stem from the varied functions of chiplets and a fragmented supply chain. Addressing these challenges requires a comprehensive approach that encompasses robust security measures at every level of the design and manufacturing process. Attend this Synopsys webinar to learn more about security for multi-die designs.

Regulation and Standardization in Security

Regulations and standards play a crucial role in addressing the security challenges associated with advanced electronic systems, including multi-die designs. Standards organizations define security levels, procedures, and certification tests to ensure chiplet conformance. The security requirements they define must cover individual chiplets, their interconnects, and the overall system, providing a holistic approach to mitigate risks associated with increased complexity.

Additionally, emerging regulations like the Cyber Resilience Act in Europe and the ISO/SAE 21434 standard for automotive cybersecurity are shaping the security landscape. These regulations emphasize the need to design security into systems from the ground up. The Cyber Resilience Act sets clear security requirements for digital products and services throughout their lifecycle, while ISO/SAE 21434 provides a framework for managing cybersecurity risks in the automotive industry, ensuring supply chain protection.

Together, regulations and standards highlight the importance of a proactive security approach. By adhering to these guidelines, organizations can mitigate risks and safeguard their products against emerging threats, ensuring robust security for advanced electronic systems.

Quantum Computing Threats and Post-Quantum Cryptography

Another important driver for progress is the imminent threat posed by quantum computing, which will break current public-key cryptographic algorithms like RSA and ECC. This makes the development of post-quantum cryptography essential. This field focuses on creating algorithms that are resistant to quantum attacks, ensuring long-term security for electronic systems.

Post-quantum cryptography aims to develop algorithms secure against both classical and quantum threats. The first post-quantum cryptographic algorithms have now been standardized by NIST, marking a significant milestone in protecting data as quantum technology advances. Implementing these new algorithms is crucial for maintaining the security of electronic systems against quantum computing threats, ensuring long-term data protection.

Other Advanced Security Solutions

To further facilitate a secure-by-design approach, a range of advanced security solutions can be leveraged. These include Physical Unclonable Functions (PUFs), embedded hardware secure modules, Secure Boot mechanisms, and Secure Interface solutions. Each of these technologies plays a critical role in fortifying multi-die designs against current and future threats.

  • PUFs provide a unique and unclonable identity to each chiplet, making it difficult for attackers to replicate or tamper with the hardware.
  • Embedded hardware secure modules, such as Synopsys’ tRoot, provide a trusted execution environment that can securely manage cryptographic operations and sensitive data.
  • Secure Boot mechanisms ensure that only authenticated and authorized firmware and software are executed on the device, preventing malicious code from being loaded (see the sketch after this list).
  • Secure Interface solutions protect data in transit between chiplets and to other system components, ensuring that communications remain confidential and tamper-proof.
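To make the Secure Boot idea concrete, here is a minimal Python sketch (not tied to any Synopsys product) of the measure-and-compare step a boot ROM performs before handing control to firmware. Production secure boot verifies a cryptographic signature chained to a hardware root of trust rather than a bare hash; the digest value and function names below are purely illustrative.

```python
import hashlib
import hmac

# Trusted reference digest, e.g. provisioned into on-die OTP or an embedded
# hardware secure module at manufacturing time (value here is illustrative only).
TRUSTED_FIRMWARE_DIGEST = bytes.fromhex(
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
)

def verify_firmware(image: bytes) -> bool:
    """Return True only if the firmware image matches the trusted digest."""
    measured = hashlib.sha256(image).digest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(measured, TRUSTED_FIRMWARE_DIGEST)

def boot(image: bytes) -> None:
    if not verify_firmware(image):
        raise RuntimeError("Secure Boot: firmware authentication failed, halting")
    # ...hand off execution to the verified firmware...
```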
Conclusion

Navigating the complex security landscape of multi-die designs requires a comprehensive and proactive approach. By understanding the importance of security and the drivers behind it, and by leveraging advanced security solutions, it is possible to build robust and secure electronic systems. Standards organizations, emerging regulations, and the advent of quantum computing all play a role in shaping the security domain. By designing security into systems from the ground up and addressing key considerations at every level, we can ensure the protection of our systems and data against current and future threats.

Want to stay ahead of security challenges in multi-die designs? Register for our webinar, "How to Approach Security for Multi-Die Designs," and learn the essential techniques you will need for your next project.

Also Read:

Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation

Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures

DVCon 2025: AI and the Future of Verification Take Center Stage


Generative AI Comes to High-Level Design
by Daniel Payne on 04-10-2025 at 10:00 am

I’ve watched the EDA industry change the level of design abstraction starting from transistor-level to gate-level, then RTL, and finally using High Level Synthesis (HLS). Another emerging software trend is the use of generative AI to make coding RTL more automated. There’s a new EDA company called Rise Design Automation that enables design and verification beyond RTL, so I attended their recent webinar to learn more about what they have to offer.

Ellie Burns started out with an overview of the semiconductor market and how trends like AI/ML, 5G, IoT and hardware accelerators are driving the landscape. RTL design techniques and IP reuse have served the industry well, but there is an insatiable demand for new and larger designs amidst a general engineering shortage.

What Rise offers is a new, three-pronged approach to meet these market challenges:

Their generative AI is an assistant that automatically creates SystemVerilog, C++ or SystemC code based on your prompts. An agent also runs high-level synthesis and verification, saving time and effort. There’s even an agent for iterative performance tuning and optimization tasks.

This high-level code is synthesized into RTL, and using high-level verification can be up to 1,000X faster than RTL verification. This flow enables debug, analysis and exploration more quickly and thoroughly than an RTL approach. System architects get early, executable models to explore architectures. RTL designers can re-use system models and reach PPA optimization faster. Verification engineers start verifying much earlier and benefit from auto-generated adaptors/transactors. Even software engineers use the virtual platform for early access to accurate hardware behavior, and their model is always in sync with the hardware.

Mike Fingeroff, Chief of HLS at Rise, was up next, showing how the high-level agents work with a human in the loop, using existing pre-trained LLMs plus a specialized knowledge base. The pre-trained LLMs eliminate the need for any sensitive RTL training data. HLS converts SystemVerilog, C++ or SystemC into synthesizable RTL, inserts pipeline registers to meet timing, adds dataflow control, enables exploration and even infers memories.
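To give a sense of the abstraction gap HLS closes, here is a small, untimed behavioral model of a FIR (multiply-accumulate) kernel written in Python. In a Rise-style flow the equivalent description would be authored in C++, SystemC or SystemVerilog, and the tool, not the designer, would insert the pipeline registers, dataflow control and inferred memories mentioned above; this sketch only illustrates the level at which the designer works.

```python
def fir(samples, coeffs):
    """Untimed behavioral FIR filter: one output per input sample.

    An HLS tool would map the multiply-accumulate onto a pipelined datapath,
    insert registers to meet the target clock, and infer a memory for the
    tap delay line.
    """
    taps = [0.0] * len(coeffs)
    outputs = []
    for x in samples:
        taps = [x] + taps[:-1]                    # shift the delay line
        outputs.append(sum(c * t for c, t in zip(coeffs, taps)))
    return outputs

# Example: 4-tap moving-average filter
print(fir([1, 2, 3, 4, 5], [0.25, 0.25, 0.25, 0.25]))
```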

Their HLS synthesis creates RTL, applying constraints for power, performance and area while optimizing with a technology-aware library. Here's the architecture that Rise has:

Alan Klinck, Co-founder of Rise DA, talked about agent-based generative AI for hardware design in three parts:

  • Rise Advisor – prompts used for tasks and questions, expert assistance, accelerate code development
  • Rise HLS Agent
  • Rise Optimization Agent

For an example case he showed us hls4ml, a Python package for machine learning inference in FPGAs.

A live prompt typed was, “How do I make my HLS design synthesizable?”

The Rise system is modular, so you can even use your own language models. Their knowledge base plus language models reduce hallucinations, improving the quality of results. The language models can run on-premises or in the cloud, your choice, and there is no training going on with your tool usage. A consumer GPU like the NVIDIA 4090 is powerful enough to run their tools. You go from HLS to RTL typically in seconds to minutes, so it's very fast.
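The knowledge-base-plus-LLM combination described here is broadly a retrieval-augmented pattern: reference material is fetched first and supplied to the model alongside the prompt, so answers are grounded in curated content rather than the model's memory alone. The generic Python sketch below illustrates that pattern only; it is not Rise's implementation, and the keyword scoring and the commented-out query_llm call are placeholders.

```python
def retrieve(query: str, knowledge_base: dict, top_k: int = 2) -> list:
    """Naive keyword-overlap retrieval from a small in-memory knowledge base."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        knowledge_base.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in ranked[:top_k]]

def build_grounded_prompt(query: str, knowledge_base: dict) -> str:
    """Prepend retrieved reference snippets so the model answers from them."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Use only the reference material below.\n\n{context}\n\nQuestion: {query}"

kb = {
    "loops": "Loops must have static bounds to be synthesizable ...",
    "interfaces": "Top-level arguments map to wire-level or bus ports ...",
}
prompt = build_grounded_prompt("How do I make my HLS design synthesizable?", kb)
# response = query_llm(prompt)  # placeholder for whichever on-prem or cloud model is used
```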

For the live demo they used Visual Studio Code on a loosely timed design with some predefined prompts, and as they asked questions and entered prompts we saw newly generated code, ready for HLS. Trade-offs between multiple implementations were quickly generated for comparison to find the optimal architecture.

Summary

I was impressed with an EDA vendor running their tool flow live during a webinar, instead of pre-canned screenshots. If you’ve considered improving your familiar RTL-based design flow with something newer and more powerful, then it’s likely time to look at what Rise DA has to offer. The engineering team has decades of experience in EDA and HLS, plus they’ve added generative AI to the flow that now benefits both design and verification engineers.

View the replay of their webinar online.

Related Blogs


Keysom and Chipflow discuss the Future of RISC-V in Automotive: Progress, Challenges, and What’s Next
by Admin on 04-10-2025 at 6:00 am

by Tomi Rantakari, CEO of ChipFlow, and Luca Testa, COO of Keysom

The automotive industry is undergoing a major transformation, driven by electrification, the rise of new market players, and the rapid adoption of emerging technologies such as AI. Among the most significant advancements is the growing adoption of RISC-V, an open-standard instruction set architecture (ISA) that is reshaping how automotive chips are designed and deployed. Companies like Keysom (keysom.io) and Chipflow (chipflow.io) are playing a crucial role in this shift, helping to navigate the opportunities and challenges of RISC-V in the automotive sector.

Progress in RISC-V Adoption

RISC-V offers a compelling alternative to proprietary architectures, providing flexibility, cost efficiency, and a growing ecosystem of development tools. In automotive applications, its modularity allows manufacturers to tailor processors for specific workloads, optimizing performance and power efficiency. With the increasing complexity of software-defined vehicles and real-time processing requirements, RISC-V’s open ecosystem fosters innovation by enabling chip customization at all levels.

Chipflow and Keysom have been actively contributing to this progress, focusing on design automation, tooling, and reference architectures and IPs that accelerate the adoption of RISC-V-based solutions in automotive systems. Both companies help reduce barriers to entry, ensuring a smoother transition for automotive players looking to integrate RISC-V into their next-generation platforms.

Challenges in Automotive Integration

Despite its advantages, RISC-V faces several challenges in automotive applications. The industry's strict safety and reliability requirements necessitate extensive validation and compliance with standards such as ISO 26262.

Additionally, transitioning from well-established proprietary architectures requires significant investment in software development, ecosystem support, and integration with existing automotive workflows.

Keysom and Chipflow recognize these challenges and are working to address them through industry collaborations, improved IP customization, toolchains, and enhanced verification methodologies. Developing a robust software ecosystem and functional safety-certified toolchains remains a top priority to ensure that RISC-V can meet the demands of mission-critical automotive applications.

The Role of the European Chips Act

One of the key drivers of RISC-V adoption in Europe is the European Chips Act. This initiative aims to enhance Europe’s semiconductor capabilities, ensuring supply chain resilience and fostering innovation in critical technologies, including open-source processor architectures like RISC-V.

Under the “Chips for Europe Initiative,” significant investments are being made to support research, pilot production, and the development of next-generation semiconductor technologies. This initiative aligns with the vision of Keysom and Chipflow, both of which are committed to advancing open hardware solutions that provide greater flexibility and sovereignty in automotive chip design. By leveraging the funding and infrastructure provided by the Chips Act, both companies are strengthening their efforts to build scalable, high-performance RISC-V solutions tailored for automotive applications.

What’s Next for RISC-V in Automotive?

The future of RISC-V in the automotive industry looks promising, with increasing investments, growing industry adoption, and expanding ecosystem support. As the industry moves toward software-defined vehicles, the demand for flexible, customizable hardware will continue to rise.

Keysom and Chipflow remain at the forefront of this evolution, driving advancements in RISC-V-based automotive processors. By focusing on design automation, open-source collaboration, and compliance with industry standards, both companies are helping pave the way for a more open and innovative automotive semiconductor landscape.

As RISC-V matures and gains traction in safety-critical applications, collaboration and ecosystem growth will be essential. The path toward a more open, customizable, and efficient automotive hardware future is becoming increasingly clear. The next few years will be crucial in shaping the role of RISC-V in vehicles, and Keysom and Chipflow are committed to contributing to this transformation.

Also Read:

CEO Interview with Cyril Sagonero of Keysom


Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation
by Kalar Rajendiran on 04-09-2025 at 10:00 am

The annual SNUG (Synopsys Users Group) conference, now in its 35th year, once again brought together key stakeholders to showcase accomplishments, discuss challenges, and explore opportunities within the semiconductor and electronics industry. With approximately 2,500 attendees, SNUG 2025 served as a dynamic hub for collaboration and knowledge exchange.

This year, the conference was co-located with the inaugural Synopsys Executive Forum, an exclusive event featuring industry-impacting discussions attended by around 350 executives from Synopsys’ customer base, along with members of the media, analyst community, and investors. The forum facilitated in-depth discussions on the evolving silicon and systems landscape, the role of AI in the future of computing, and strategies to address key industry challenges.

The following summarizes the key insights gathered from various sessions at the event.

Synopsys CEO Sassine Ghazi’s Keynote: Re-Engineering Engineering

As semiconductor designs grow increasingly complex, traditional verification methods are proving insufficient. With quadrillions of cycles requiring validation, new verification paradigms have become critical. The cost and time associated with verification are too significant to rely on “good enough” approaches, making accurate and thorough validation indispensable.

The integration of artificial intelligence (AI) is fundamentally reshaping electronic design automation (EDA) workflows, extending beyond mere efficiency improvements. As chip designs grow in complexity, new methodologies are necessary, and AI-driven workflows offer substantial productivity gains. AI adoption in EDA is evolving from assistance to full automation. Initially, AI acted as a copilot, aiding engineers, but it is now advancing toward specialized AI agents, multi-agent orchestration, and, eventually, autonomous design systems. Just as autonomous driving systems evolved in stages, AI in engineering must undergo a similar maturation process—gradually reducing human intervention while increasing efficiency.

Collaboration among industry leaders is crucial to advancing semiconductor innovation. Partnerships with companies such as NVIDIA, Microsoft, and Arm are essential for driving AI adoption in semiconductor design. No single company can tackle the growing complexity of semiconductor engineering alone, making collaboration key to accelerating innovation and developing solutions that keep pace with the industry’s rapid evolution.

Looking ahead, AI-driven “agent engineers” will work alongside human engineers—initially assisting them, but eventually making autonomous decisions in chip design. This shift will redefine the engineering process, unlocking new levels of efficiency and innovation while enabling more advanced and agile semiconductor development.

The impact of semiconductor advancements extends far beyond the industry itself, profoundly influencing how people live, work, and interact with technology—shaping the future of society. Synopsys’ mission is to empower innovators to drive human advancement.

A special highlight of the keynote session was the virtual guest appearance of Satya Nadella, CEO and Chairman of Microsoft. Nadella emphasized several AI milestones from a software perspective: the evolution from code completion to chat-based interactions, and now the emergence of Agentic AI, where AI agents autonomously execute high-level human instructions. This development has profound business implications, as discussed during the Fireside Chat session.

Fireside Chat: Sustainable Computing – Silicon to Systems

Synopsys CEO Sassine Ghazi and Arm CEO Rene Haas engaged in a thought-provoking discussion on sustainable computing and its implications for the future.

Emerging AI business models are beginning to disrupt traditional platforms and industries. AI-driven agents have the potential to automate user interactions, posing challenges for businesses and platforms that rely on conventional engagement and ad-based revenue models. As AI bypasses traditional interaction strategies, entire sectors could be reshaped. Companies that swiftly adapt to new AI technologies and business models will position themselves as industry leaders.

The Stargate initiative, a collaboration between OpenAI, SoftBank, Oracle, and Microsoft, aims to build massive data centers across the U.S. This effort presents challenges in energy consumption, thermodynamics, and infrastructure management. Large-scale AI projects will require innovative energy-efficient solutions, integrating renewable energy with cutting-edge computing infrastructure to meet their immense power demands. As AI workloads extend to edge devices, the development of low-power, high-performance chips will be critical to making AI viable in mobile and embedded systems.

In healthcare, AI-driven drug discovery has the potential to significantly accelerate the drug approval process—particularly in areas like cancer research—by improving success rates and eliminating costly early-stage trials.

Panel Discussion: Silicon to Systems Innovations for AI

A distinguished panel of experts from Astera Labs, TSMC, Broadcom, Arm, and Meta explored key innovations in AI-driven semiconductor development.

The discussion highlighted how AI is transforming chip design decision-making through advanced reasoning models that optimize design trade-offs and system performance. Panelists also examined strategies to reduce AI model training time, enabling faster deployment and iteration cycles.

Ensuring reliable connectivity in AI-powered devices was another key focus, as seamless data transmission is critical for AI-driven systems. Panelists emphasized the importance of designing products that align with customers’ development schedules while maintaining technological advancements.

Finally, they underscored the significance of strong IP partnerships, which foster collaboration and accelerate innovation across the semiconductor industry.

Panel Discussion: Re-Engineering Automotive Engineering

The complexities of integrating AI into automotive engineering were a central focus of this discussion, featuring experts from Ansys, Mercedes-Benz R&D North America, and Synopsys.

A key challenge in this domain is achieving seamless multi-domain interactions, where hardware, software, and AI technologies must function cohesively. Another critical factor is the high cost and complexity of data collection for advanced driver-assistance systems (ADAS), which is essential for training and validating AI-driven automotive solutions.

Mercedes-Benz R&D North America shared its approach to edge processing, where data is processed locally in vehicles rather than being stored or transmitted to the cloud—reducing latency and improving efficiency. The discussion also explored the role of synthetic data in training AI models. While synthetic data is widely used for validation, panelists noted that its effectiveness in training AI models remains limited. Additionally, electronic control units (ECUs) continue to evolve, playing an increasingly vital role in advancing autonomous driving capabilities.

The panel also addressed the rapid pace of product development in the automotive industry, particularly in China, where the minimum viable product (MVP) approach enables fast iterations but raises concerns about long-term reliability.

Panel Discussion: Custom High Bandwidth Memory (HBM) for AI

A panel featuring experts from AWS, SK hynix, Samsung, Marvell, and Synopsys explored this topic.

High Bandwidth Memory (HBM) has been a cornerstone of AI advancement, enabling high-speed data access with low latency, which is critical for AI workloads. In this discussion, panelists examined the role of HBM in AI and the ongoing debate between traditional HBM and custom HBM (cHBM) solutions.

While HBM offers benefits such as reducing I/O space by at least 25%, it comes at the cost of significantly higher power consumption. This tradeoff presents a challenge in balancing efficiency and performance.

Customization in memory solutions also poses scalability challenges. Memory vendors struggle with standardized production when customization is required, potentially hindering mass adoption. The lack of an agreed-upon architectural standard for custom HBM further complicates widespread deployment. However, future iterations of UCIe (Universal Chiplet Interconnect Express) are expected to bridge the gap between custom and traditional HBM solutions, promoting greater interoperability and scalability in AI hardware.

Panel Discussion: The Productivity Promise – AI Apps from RL to LLMs to Agents

This panel, moderated by industry journalist Junko Yoshida and featuring experts from Nvidia, Microsoft, AMD, Intel, and Synopsys, explored the broader implications of AI beyond chip design.

A key transformation in the industry is the shift from AI as a tool to AI as an autonomous problem solver, marking the emergence of Agentic workflows. In this new paradigm, AI no longer simply assists with tasks but takes on a proactive, autonomous role in solving complex problems.

As AI evolves, its impact on education becomes increasingly significant. Preparing the next generation of semiconductor engineers to collaborate with AI-driven design tools is essential, ensuring they can leverage these technologies in innovative ways.

AI’s learning capabilities are another major asset, enabling it to uncover insights and make connections that humans may not have initially considered—enhancing decision-making and innovation.

Additionally, the use of generator-checker models exemplifies a powerful approach in AI systems. In this model, AI agents act as both creators and validators, working in tandem to ensure reliability and accuracy, thereby advancing the trustworthiness and effectiveness of AI applications.

Panel Discussion: Realizing the Promise of Quantum Computing

A panel featuring experts from IBM Quantum, the University of Waterloo, Intel, Qolab, and Synopsys explored how quantum computing intersects with AI.

Quantum computing is still in its early stages, with significant progress made, but practical, large-scale systems may take decades to develop. While quantum mechanics is a broad phenomenon, its applications extend beyond quantum computers to various technological systems.

The relationship between quantum computing and AI is bidirectional, where Quantum for AI and AI for quantum both play vital roles in advancing each other. On one hand, quantum computing has the potential to significantly enhance AI by accelerating computations, enabling faster and more efficient model training. On the other hand, AI is being leveraged to optimize quantum systems, improving algorithms and error correction methods. However, building stable qubits remains a major challenge. Maintaining quantum states for computation is notoriously difficult due to their inherent sensitivity to external factors such as temperature and electromagnetic interference. Despite these challenges, the promise of quantum computing lies in its ability to handle complex problems beyond the capacity of classical computers.

To unlock high-value business applications, millions of qubits will be necessary, driving continued research and development to make quantum computing a practical and transformative tool for industries like AI, pharmaceuticals, and cryptography. A hybrid approach combining classical and quantum systems is necessary for practical quantum computing, leveraging existing infrastructure to control qubits. Effective system-level design, including hardware, software, and control systems, is crucial for making quantum systems functional.

Scalability remains a major challenge, requiring millions of qubits and error correction to achieve practical quantum computing, much like the scaling challenges faced by traditional semiconductors. The quality of qubits and the need for error correction are critical, as reliable qubits are essential for minimizing errors.

Manufacturability is key to scaling quantum systems, and the expertise of the semiconductor industry will help accelerate progress. Long-term investment and collaboration between the private and public sectors, as well as academic institutions and companies, were discussed as vital for advancing the field. Although quantum computing holds immense potential, practical applications will take time to develop, with ongoing experimentation driving innovation.

Revolutionary Results in Evolutionary Timescales

Toward the end of the Synopsys Executive Forum, I had an informal conversation with Aart de Geus, where he shared his insights on various topics, including sustainable power, renewable energy, CO2 emissions, his work with NGOs, and the role of nuclear power plants in future energy grids. This chat, which took place between sessions and was not part of the formal agenda, provided valuable perspectives on the broader implications of technological advancements and the rapid adoption of new technologies in the semiconductor industry.

Aart summed up the mission that drives both Synopsys and the entire semiconductor ecosystem, remarking that in the face of these critical challenges, the semiconductor industry is delivering “revolutionary results in evolutionary timescales.”

Summary

The Synopsys Executive Forum provided a comprehensive look at the future of AI, semiconductor engineering, and computing. The event sessions were thoughtful, organized, and productive, making the forum a top-notch experience. As the industry moves toward more autonomous AI systems, re-engineered automotive solutions, and scalable quantum computing, collaboration across the ecosystem will be critical in driving sustainable, high-impact innovation. Looking forward to more such events.

Also Read:

Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures

DVCon 2025: AI and the Future of Verification Take Center Stage

Synopsys Expands Hardware-Assisted Verification Portfolio to Address Growing Chip Complexity


Podcast EP282: An Overview of Andes Focus on RISC-V and the Upcoming RISC-V CON
by Daniel Nenni on 04-09-2025 at 6:00 am

Dan is joined by Marc Evans, director of business development and technology at Andes. Marc has over twenty years of experience in the use of CPU, DSP, and Specialized IP in SoCs from his prior positions at Lattice Semiconductor, Ceva, and Tensilica. During his early career, Marc was a processor architect, making significant contributions to multiple microprocessors and systems at Tensilica, HP, Rambus, Rise Technology, and Amdahl.

Dan discusses Andes' significant focus on the commercialization of RISC-V with Marc, along with the upcoming Andes RISC-V CON. Marc explains that Andes holds this conference at a time that doesn't conflict with key events run by RISC-V International; the idea is to provide broad coverage of the RISC-V movement throughout the year. Marc provides an overview of the event, which includes engineering-level presentations along with partner contributions, strategic presentations and a new Developer's Track that provides a deep dive into key areas. There are also fireside chats, high-profile talks from industry luminaries and the opportunity to network with conference attendees.

The event will be held at the DoubleTree Hotel in San Jose on April 29, 2025. You can learn more about the conference and register to attend for free here. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Pierre Laboisse of Aledia
by Daniel Nenni on 04-08-2025 at 10:00 am

With over 25 years of international experience in the high-tech sector, Pierre Laboisse now leads Aledia with strategic expertise. Before Aledia, he made significant contributions at Infineon, NXP, and ams OSRAM. Having served on the boards of KeyLemon and 7 Sensing Software, he demonstrates solid expertise in corporate strategy and execution. Pierre is renowned for his results-driven leadership and commitment to innovation in the microLED sector.

Pierre is leading Aledia with a vision for scaling its innovative technology to transform the global display market. Pierre brings extensive leadership experience in high-tech industries and a proven track record of driving business growth.

Tell us about your company:

Aledia is a French deep-tech company specializing in advanced display solutions. Originating from CEA's research labs in 2011, it focuses on developing unique microLED technology for next-generation displays, including applications in consumer products, automotive and augmented reality. Aledia's proprietary technology uses a new class of 3D nanowire LEDs grown on silicon, offering significant advantages in brightness, energy efficiency, and cost compared to traditional 2D microLED technologies.

What problems are you solving?

AR and consumer products simultaneously require displays that are exceptionally bright, energy-efficient, compact and cost-effective. No technology in mass production today can meet all these criteria at the same time. 3D microLED is the only viable solution to fulfill the requirements of this future generation of displays.

For AR in particular, tech giants are accelerating efforts in microLED technology for smart glasses, aiming for commercial launches by 2027, but hardware challenges like power consumption, bulkiness, and manufacturing costs still hinder mass adoption. After 12 years of R&D, nearly 300 patents, and $600 million in investment, Aledia has overcome the toughest hardware challenges, paving the way for the most immersive, AI-powered AR vision experiences ever conceived.

We address these needs through a proprietary process that grows GaN nanowires on standard silicon wafers, enabling 3D microLEDs with high performance and potential for lower-cost production. This combination of performance and scalability makes it more efficient and cost-effective for accelerating next-generation displays into everyday devices.

What application areas are your strongest?

We’re particularly focused on augmented reality, where top-tier display performance in a very small, power-efficient footprint is a must. Our nanowire-based microLEDs are designed to deliver high brightness and efficiency, even in challenging lighting conditions, while fitting into compact form factors.

Additional applications include consumer products (smartwatches and smartphones), automotive (dashboards and head-up displays) and TVs. In fact, by having the world's smallest, most efficient LED on 8-inch silicon wafers, we have a technology that can lower microLED production costs. This is a very important issue for these applications, where price competition with OLED and LCD technologies is very strong.

What keeps your customers up at night?

To achieve mass adoption of augmented reality and other truly immersive experiences, our customers need displays that can handle bright outdoor settings, operate efficiently on limited power, and fit into lightweight, user-friendly designs. They want solutions that move beyond conceptual demonstrations and deliver meaningful, everyday utility. We’re honing our microLED approach so that when these products hit the market, they deliver the kind of seamless, real-world experience users genuinely value.

What does the competitive landscape look like and how do you differentiate?

There are currently players in 2D microLED technology, but their solutions are still very expensive and not suitable for mass production. Aledia's advantage lies in its in-house pilot production line, representing an investment of over $200 million, at the center of Europe's "Display Valley," enabling faster iteration without initial volume constraints. By utilizing semiconductor-grade silicon in 8-inch and 12-inch formats, Aledia lowers production costs for large-scale production of microLEDs, accelerating widespread adoption in a wide range of displays.

What new features/technology are you working on?

After 12 years of relentless R&D, a portfolio of nearly 300 patents and $600 million in investment, we’re now focused on bringing our microLEDs to market to serve customers at the cutting edge of innovation in AR, automotive, wearables and electronics. Aledia is ready and able to support customer demand ramp up to nearly 5,000 wafer starts per week.

How do customers normally engage with your company?

Customers typically collaborate with us early in the design process to integrate our microLEDs into their next-generation devices. Our in-house pilot production line allows us to accelerate development and adapt to their needs quickly. This hands-on approach ensures we’re solving the right problems and delivering impactful solutions.

Contact Aledia

Also Read:

CEO Interview with Cyril Sagonero of Keysom

CEO Interview with Matthew Stephens of Impact Nano

CEO Interview with Dr Greg Law of Undo


Stitched Multi-Patterning for Minimum Pitch Metal in DRAM Periphery
by Fred Chen on 04-08-2025 at 6:00 am

In a DRAM chip, the memory array contains the most densely packed features, but at least they are regularly arranged. Outside the array, the regularity is lost, but in the most difficult cases the pitches can still be comparable with those within the array, though generally larger. Such features include the lowest metal lines in the periphery for the sense amplifier (SA) and sub-wordline driver (SWD) circuits.

A key challenge is that these lines are meandering in appearance and the pitch varies over a range; the local max/min pitch ratio can range from ~1.4 to 2. These pitches have different focus windows [1], and in EUV lithography, these windows may be separated by more than the resist thickness [2].

Pitch uniformity within a single exposure can be attained if the layout is split accordingly for double patterning with stitching [3,4] (Figure 1). The layout is dissected into stripes of alternating color, each color assigned to one of two exposures. Features may cross stripe boundaries; in that case, the two exposures need to stitch correctly at the boundaries.
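As a rough illustration of this decomposition step (my own sketch, not taken from the referenced papers), the Python below assigns each feature to one of two exposures based on the stripe it occupies and flags features that cross a stripe boundary as needing a stitch. Real decomposition tools cut the crossing feature into per-stripe segments with a small overlap at the boundary and apply overlay-aware stitch rules; the stripe width and feature coordinates here are purely illustrative.

```python
STRIPE_WIDTH = 200  # nm, width of each alternating-color stripe (illustrative)

def decompose(features):
    """Assign features to two exposures by stripe; flag boundary crossings.

    Each feature is (name, y_min, y_max) in nm, with stripes stacked along y.
    A feature that crosses a stripe boundary is flagged as stitched: in practice
    it is cut at the boundary and each segment goes to its stripe's exposure,
    with the two exposures overlapping slightly at the stitch.
    """
    mask_a, mask_b, stitched = [], [], []
    for name, y_min, y_max in features:
        first_stripe = int(y_min // STRIPE_WIDTH)
        last_stripe = int(y_max // STRIPE_WIDTH)
        if first_stripe != last_stripe:      # crosses one or more boundaries
            stitched.append(name)
        elif first_stripe % 2 == 0:          # even stripes -> first exposure
            mask_a.append(name)
        else:                                # odd stripes -> second exposure
            mask_b.append(name)
    return mask_a, mask_b, stitched

print(decompose([("m1", 20, 150), ("m2", 180, 260), ("m3", 210, 380)]))
# (['m1'], ['m3'], ['m2'])
```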

Figure 1. Splitting a metal layout for stitched double patterning for better pitch uniformity.

Alternatively, some features like diagonals may be forbidden to be stitched, resulting in a different layout split (Figure 2).

Figure 2. Alternatively splitting a metal layout for stitched double patterning for better pitch uniformity, avoiding stitching of diagonal features.

For minimum pitches above 40 nm, we expect double patterning to be sufficient with ArF immersion lithography. In case the minimum pitch is lower than this, triple patterning may be used (Figure 3) with ArF immersion lithography, as an alternative to EUV double patterning.

Figure 3. Parsing the layout of Figure 1 for triple patterning including stitching.

Previously, quadruple patterning was suggested for where the minimum line pitch is less than 40 nm [1], but it turns out triple patterning may suffice (Figure 4).

Figure 4. A quadruple patterning arrangement [1] (left) can be rearranged to support triple patterning with stitching (center) or possibly even avoid stitching (right).

In some special cases, a multiple spacer approach may be able to produce the periphery metal pattern with islands and bends with only one mask exposure [5]. However, stitched double patterning has been the default choice for a very long time [3,4]; it should be expected to be kept that way for as long as possible, even through the mid-teen nm DRAM nodes [6].

References

[1] F. Chen, Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM.

[2] A. Erdmann et al., J. Micro/Nanolith. MEMS MOEMS 15, 021205 (2016); E. van Setten et al., 2012 International Symposium on EUV Lithography.

[3] Y. Kohira et al., Proc. SPIE 9053, 90530T (2014).

[4] S-Min Kim et al., Proc. SPIE 6520, 65200H (2007).

[5] F. Chen, Triple Spacer Patterning for DRAM Periphery Metal.

[6] C-M. Lim, Proc. SPIE 11854, 118540W (2021).

Also Read:

A Perfect Storm for EUV Lithography

Variable Cell Height Track Pitch Scaling Beyond Lithography

A Realistic Electron Blur Function Shape for EUV Resist Modeling

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution


Alphawave Semi is in Play!
by Daniel Nenni on 04-07-2025 at 10:00 am

We started working with Alphawave at the end of 2020 with a CEO Interview. I had met Tony Pialis before and found him to be a brilliant and charismatic leader, so I knew it would be a great collaboration. Tony was already an IP legend after his company was acquired by Intel. After 4+ years at Intel, Tony co-founded Alphawave in 2017. Today, Alphawave Semi is a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure. They are publicly traded on the London Stock Exchange (LSE), with a recent stock price spike for obvious reasons. I have nothing but great things to say about Alphawave, absolutely.

When I first read about the exclusive OEM deal between Siemens and Alphawave I immediately thought: Why was this not an acquisition?

Siemens to accelerate customer time to market with advanced silicon IP through new Alphawave Semi partnership

“Siemens Digital Industries Software is a key and trusted partner for AI and hyperscaler developers, and our agreement simplifies and speeds the process of developing SoCs for these, and other leading-edge technologies, to incorporate Alphawave Semi’s IP,” said Tony Pialis, president and CEO, Alphawave Semi. “Our technologies play a critical role in reducing interconnect bottlenecks and this collaboration greatly expands our customer reach, allowing more companies to deliver next-level data processing.”

Ever since Siemens acquired Mentor Graphics in 2017 for $4.5B, they have implemented an aggressive acquisition strategy, acquiring dozens of companies. We track them on the EDA Merger and Acquisition Wiki. During the day I help emerging companies with exits, so I have M&A experience with the big EDA companies, including Siemens. They all do it differently, but I can tell you the Siemens M&A team is VERY good and they do not take being outbid lightly. I was with Solido Design and Fractal when they were acquired by Siemens, and I was with Tanner EDA and Berkeley Design when they were acquired by Mentor Graphics. It was a night-versus-day difference between the M&A processes.

The only answer I came up with as to why it was an OEM agreement versus an outright acquisition was price; Tony wanted more money. That was purely speculation on my part, but now that I have read that both Qualcomm and Arm might be interested in acquiring Alphawave, it is more than speculation.

Qualcomm considers buying UK semiconductor firm Alphawave

Exclusive-Arm recently sought to acquire Alphawave for AI chip tech, sources say

Given that Alphawave has a lot of Intel experience inside and Lip-Bu Tan knows the importance of IP and ASIC design, maybe Intel will make an offer as well?

To be clear, Alphawave is not just an IP licensing company; they also do chiplets and custom ASICs. In 2022 they acquired SiFive's ASIC business for $210M. This was the former Open-Silicon that SiFive acquired in 2018 for an undisclosed amount. I worked with both Open-Silicon and eSilicon at the time. The ASIC business is a whole lot easier if you have the best IP money can buy, and Alphawave does.

I remember trying to talk Mentor into entering the IP business 15 years ago, but they would not budge, which was clearly a mistake. Synopsys and Cadence leverage their IP for EDA business, and that puts Siemens EDA at a distinct disadvantage, thus the OEM agreement with Alphawave. IP is a big driver in the semiconductor ecosystem, just ask TSMC.

Also, Broadcom, Marvell, and MediaTek each have an ASIC business, while Qualcomm and Arm do not, so there is more than IP in play here.

I would bet that Siemens has some kind of Right of First Refusal tied to the OEM agreement, and Tony getting offers from Qualcomm and Arm will pull that trigger. I really do not see Siemens having much of a choice but to pay a premium for Alphawave.

Exciting times in the semiconductor industry!

Also Read:

Podcast EP276: How Alphawave Semi is Fueling the Next Generation of AI Systems with Letizia Giuliano

Scaling AI Data Centers: The Role of Chiplets and Connectivity

How AI is Redefining Data Center Infrastructure: Key Innovations for the Future


Even HBM Isn’t Fast Enough All the Time
by Jonah McLeod on 04-07-2025 at 6:00 am

Why Latency-Tolerant Architectures Matter in the Age of AI Supercomputing

High Bandwidth Memory (HBM) has become the defining enabler of modern AI accelerators. From NVIDIA’s GB200 Ultra to AMD’s MI400, every new AI chip boasts faster and larger stacks of HBM, pushing memory bandwidth into the terabytes-per-second range. But beneath the impressive specs lies a less obvious truth: even HBM isn’t fast enough all the time. And for AI hardware designers, that insight could be the key to unlocking real performance.

The Hidden Bottleneck: Latency vs Bandwidth

HBM solves one side of the memory problem—bandwidth. It enables thousands of parallel cores to retrieve data from memory without overwhelming traditional buses. However, bandwidth is not the same as latency.

Even with terabytes per second of bandwidth available, individual memory transactions can still suffer from delays. A single miss in a load queue might cost dozens of clock cycles. The irregular access patterns typical of attention layers or sparse matrix operations often disrupt predictive mechanisms like prefetching. In many systems, memory is shared across multiple compute tiles or chiplets, introducing coordination and queuing delays that HBM can’t eliminate. And despite the vertically stacked nature of HBM, DRAM row conflicts and scheduling contention still occur.

In aggregate, these latency events create performance cliffs. While the memory system may be technically fast, it’s not always fast enough in the precise moment a compute engine needs data—leading to idle cycles in the very units that make these chips valuable.
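A back-of-the-envelope model makes the point (the numbers are illustrative, not measurements of any particular part): even a modest miss rate drags average access latency, and therefore dependent-instruction throughput, well below what headline bandwidth suggests.

```python
# All numbers are illustrative.
hit_latency = 2      # cycles, data arrives from local buffers or a correct prefetch
miss_penalty = 60    # cycles, queuing + DRAM row conflict + cross-tile arbitration
miss_rate = 0.05     # 5% of accesses fall off the predicted path

avg_latency = (1 - miss_rate) * hit_latency + miss_rate * miss_penalty
print(f"average access latency: {avg_latency:.1f} cycles")        # ~4.9 cycles

# For a dependent chain that must wait for each access before issuing the next,
# throughput collapses to one result per avg_latency cycles; with ideal latency
# hiding it stays at one result per cycle.
print(f"throughput without latency tolerance: {1 / avg_latency:.2f} results/cycle")  # ~0.20
print("throughput with ideal latency hiding:  1.00 results/cycle")
```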

Vector Cores Don’t Like to Wait

AI processors, particularly those optimized for vector and matrix computation, are deeply dependent on synchronized data flow. When a delay occurs—whether due to memory access, register unavailability, or data hazards—entire vector lanes can stall. A brief delay in data arrival can halt hundreds or even thousands of operations in flight.

This reality turns latency into a silent killer of performance. While increasing HBM bandwidth can help, it’s not sufficient. What today’s architectures truly need is a way to tolerate latency—not merely race ahead of it.

The Case for Latency-Tolerant Microarchitecture

Simplex Micro, a patent-rich startup based in Austin, has taken on this challenge head-on. Its suite of granted patents focuses on latency-aware instruction scheduling and pipeline recovery, offering mechanisms to keep compute engines productive even when data delivery lags.

Among their innovations is a time-aware register scoreboard, which tracks expected load latencies and schedules operations accordingly, avoiding data hazards before they occur. Another key invention enables zero-overhead instruction replay, allowing instructions delayed by memory access to reissue cleanly and resume without pipeline disruption. Additionally, Simplex has introduced loop-level out-of-order execution, enabling independent loop iterations to proceed as soon as their data dependencies are met, rather than being held back by artificial ordering constraints.
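As a conceptual illustration only (the article does not disclose the details of the patented designs), a time-aware scoreboard can be modeled as tracking, for each destination register, the cycle at which its value is expected to arrive, and holding back any instruction whose sources are not yet due while letting independent work proceed:

```python
class TimeAwareScoreboard:
    """Toy model: each register records the cycle its pending result is due,
    so an instruction issues only once all of its sources will be ready."""

    def __init__(self):
        self.ready_cycle = {}  # register name -> cycle its value becomes available

    def issue(self, dst, latency, now):
        # Record when the destination will be ready (e.g. a load with a known
        # or predicted memory latency), so later consumers schedule around it.
        self.ready_cycle[dst] = now + latency

    def can_issue(self, srcs, now):
        return all(self.ready_cycle.get(r, 0) <= now for r in srcs)

sb = TimeAwareScoreboard()
sb.issue("v0", latency=40, now=0)      # long-latency vector load into v0
print(sb.can_issue(["v0"], now=10))    # False: consumer held back, no hazard incurred
print(sb.can_issue(["v1"], now=10))    # True: independent work keeps the pipeline busy
print(sb.can_issue(["v0"], now=45))    # True: data is due, issue without replay
```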

Together, these technologies form a microarchitectural toolkit that keeps vector units fed and active—even in the face of real-world memory unpredictability.

Why It Matters for Hyperscalers

The implications of this design philosophy are especially relevant for companies building custom AI silicon—like Google’s TPU, Meta’s MTIA, and Amazon’s Trainium. While NVIDIA has pushed the envelope on HBM capacity and packaging, many hyperscalers face stricter constraints around power, die area, and system cost. For them, scaling up memory may not be a sustainable strategy.

This makes latency-tolerant architecture not just a performance booster, but a practical necessity. By improving memory utilization and compute efficiency, these innovations allow hyperscalers to extract more performance from each HBM stack, enhance power efficiency, and maintain competitiveness without massive increases in silicon cost or thermal overhead.

The Future: Smarter, Not Just Bigger

As AI workloads continue to grow in complexity and scale, the industry is rightly investing in higher-performance memory systems. But it’s increasingly clear that raw memory bandwidth alone won’t solve everything. The real competitive edge will come from architectural intelligence—the ability to keep vector engines productive even when memory stalls occur.

Latency-tolerant compute design is the missing link between cutting-edge memory technology and real-world performance. And in the race toward efficient, scalable AI infrastructure, the winners will be those who optimize smarter—not just build bigger.

Also Read:

RISC-V’s Privileged Spec and Architectural Advances Achieve Security Parity with Proprietary ISAs

Harnessing Modular Vector Processing for Scalable, Power-Efficient AI Acceleration

An Open-Source Approach to Developing a RISC-V Chip with XiangShan and Mulan PSL v2