
CEO Interview with Pierre Laboisse of Aledia

by Daniel Nenni on 04-08-2025 at 10:00 am


With over 25 years of international experience in the high-tech sector, Pierre Laboisse now leads Aledia with strategic expertise. Before Aledia, he made significant contributions at Infineon, NXP, and ams OSRAM. Having served on the boards of KeyLemon and 7 Sensing Software, he demonstrates solid expertise in corporate strategy and execution. Pierre is renowned for his results-driven leadership and commitment to innovation in the microLED sector.

Pierre is leading Aledia with a vision for scaling its innovative technology to transform the global display market. Pierre brings extensive leadership experience in high-tech industries and a proven track record of driving business growth.

Tell us about your company:

Aledia is a French deep-tech company specializing in advanced display solutions. A spin-off from CEA’s research labs in 2011, it develops unique microLED technology for next-generation displays, with applications in consumer products, automotive, and augmented reality. Aledia’s proprietary technology uses a new class of 3D nanowire LEDs grown on silicon, offering significant advantages in brightness, energy efficiency, and cost compared to traditional 2D microLED technologies.

What problems are you solving?

AR and consumer products require displays that are simultaneously exceptionally bright, energy-efficient, compact, and cost-effective. No technology in mass production today meets all of these criteria at once. 3D microLED is the only viable solution that can fulfill the requirements of this next generation of displays.

For AR in particular, tech giants are accelerating efforts in microLED technology for smart glasses, aiming for commercial launches by 2027, but hardware challenges like power consumption, bulkiness, and manufacturing costs still hinder mass adoption. After 12 years of R&D, nearly 300 patents, and $600 million in investment, Aledia has overcome the toughest hardware challenges, paving the way for the most immersive, AI-powered AR vision experiences ever conceived.

We address these needs through a proprietary process that grows GaN nanowires on standard silicon wafers, enabling 3D microLEDs with high performance and potential for lower-cost production. This combination of performance and scalability makes it more efficient and cost-effective for accelerating next-generation displays into everyday devices.

What application areas are your strongest?

We’re particularly focused on augmented reality, where top-tier display performance in a very small, power-efficient footprint is a must. Our nanowire-based microLEDs are designed to deliver high brightness and efficiency, even in challenging lighting conditions, while fitting into compact form factors.

Additional applications include consumer products (smartwatches and smartphones), automotive (dashboards and head-up displays), and TVs. In fact, with the world’s smallest, most efficient LED built on 8-inch silicon wafers, we have a technology that can lower microLED production costs. This is a very important issue for these applications, where price competition with OLED and LCD technologies is very strong.

What keeps your customers up at night?

To achieve mass adoption of augmented reality and other truly immersive experiences, our customers need displays that can handle bright outdoor settings, operate efficiently on limited power, and fit into lightweight, user-friendly designs. They want solutions that move beyond conceptual demonstrations and deliver meaningful, everyday utility. We’re honing our microLED approach so that when these products hit the market, they deliver the kind of seamless, real-world experience users genuinely value.

What does the competitive landscape look like and how do you differentiate?

There are currently players in 2D microLED technology, but their solutions are still very expensive and not suitable for mass production. Aledia’s advantage lies in its over-$200-million in-house pilot production line at the center of Europe’s “Display Valley,” enabling faster iteration without initial volume constraints. By utilizing semiconductor-grade silicon in 8-inch and 12-inch formats, Aledia lowers the cost of large-scale microLED production, accelerating widespread adoption across a wide range of displays.

What new features/technology are you working on?

After 12 years of relentless R&D, a portfolio of nearly 300 patents, and $600 million in investment, we’re now focused on bringing our microLEDs to market to serve customers at the cutting edge of innovation in AR, automotive, wearables, and electronics. Aledia is ready and able to support customer demand ramping up to nearly 5,000 wafer starts per week.

How do customers normally engage with your company?

Customers typically collaborate with us early in the design process to integrate our microLEDs into their next-generation devices. Our in-house pilot production line allows us to accelerate development and adapt to their needs quickly. This hands-on approach ensures we’re solving the right problems and delivering impactful solutions.

Contact Aledia

Also Read:

CEO Interview with Cyril Sagonero of Keysom

CEO Interview with Matthew Stephens of Impact Nano

CEO Interview with Dr Greg Law of Undo


Stitched Multi-Patterning for Minimum Pitch Metal in DRAM Periphery

by Fred Chen on 04-08-2025 at 6:00 am


In a DRAM chip, the memory array contains features which are the most densely packed, but at least they are regularly arranged. Outside the array, the regularity is lost, but in the most difficult cases, the pitches can still be comparable with those within the array, though generally larger. Such features include the lowest metal lines in the periphery for the sense amplifier (SA) and sub-wordline driver (SWD) circuits.

A key challenge is that these lines meander, and the pitch varies over a range; the local max/min pitch ratio can range from ~1.4 to 2. These pitches have different focus windows [1], and in EUV lithography, these windows may be separated by more than the resist thickness [2].

Pitch uniformity within a single exposure can be attained if the layout is split accordingly for double patterning with stitching [3,4] (Figure 1). The layout is dissected into stripes of alternating color, each color assigned to one of two exposures. Features may cross stripe boundaries; in that case, the two exposures need to stitch correctly at the boundaries.

Figure 1. Splitting a metal layout for stitched double patterning for better pitch uniformity.
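The dissection described above can be sketched as a simple assignment procedure. The following toy model is illustrative only (the segment representation and stripe height are my assumptions, not an actual layout-decomposition flow): each feature is a list of vertical extents, stripes alternate between two exposures, and any feature touching or crossing more than one stripe is flagged for stitching.

```python
def split_for_stitching(features, stripe_height, num_exposures=2):
    """Assign feature segments to exposures by stripe; flag features that
    cross a stripe boundary and therefore need stitching."""
    assignment = {}   # (feature name, segment index) -> exposure number
    stitched = set()  # features requiring stitching at a boundary
    for name, segments in features.items():
        stripes_touched = set()
        for idx, (y0, y1) in enumerate(segments):
            stripe = int(y0 // stripe_height)      # stripe holding segment start
            assignment[(name, idx)] = stripe % num_exposures
            stripes_touched.add(stripe)
            if int((y1 - 1e-9) // stripe_height) != stripe:
                stitched.add(name)                 # segment spans a boundary
        if len(stripes_touched) > 1:               # feature spans both colors
            stitched.add(name)
    return assignment, stitched

# A meandering line spanning the boundary at y = 40 must be stitched:
feats = {"meander": [(0.0, 30.0), (30.0, 70.0)], "short": [(5.0, 35.0)]}
assign, stitch = split_for_stitching(feats, stripe_height=40.0)
```

A real flow would also honor forbidden-stitch features, such as the diagonals of Figure 2, when drawing the stripe boundaries.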

Alternatively, some features like diagonals may be forbidden to be stitched, resulting in a different layout split (Figure 2).

Figure 2. Alternatively splitting a metal layout for stitched double patterning for better pitch uniformity, avoiding stitching of diagonal features.

For minimum pitches above 40 nm, we expect double patterning to be sufficient with ArF immersion lithography. If the minimum pitch is lower than this, triple patterning with ArF immersion lithography may be used (Figure 3) as an alternative to EUV double patterning.

Figure 3. Parsing the layout of Figure 1 for triple patterning including stitching.
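The choice between these schemes reduces to a pitch-threshold check. A minimal sketch follows, where the ~80 nm single-exposure pitch limit for ArF immersion is my assumption and the 40 nm cutoff comes from the text:

```python
ARF_SINGLE_EXPOSURE_MIN_PITCH = 80.0  # nm, assumed ArF immersion limit
DOUBLE_PATTERNING_CUTOFF = 40.0       # nm, per the text

def arf_exposures_needed(min_pitch_nm):
    """Suggested number of stitched ArF immersion exposures for a
    given minimum metal pitch."""
    if min_pitch_nm >= ARF_SINGLE_EXPOSURE_MIN_PITCH:
        return 1  # single exposure suffices
    if min_pitch_nm > DOUBLE_PATTERNING_CUTOFF:
        return 2  # stitched double patterning
    return 3      # stitched triple patterning, an alternative to EUV
```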

Previously, quadruple patterning was suggested where the minimum line pitch is less than 40 nm [1], but it turns out triple patterning may suffice (Figure 4).

Figure 4. A quadruple patterning arrangement [1] (left) can be rearranged to support triple patterning with stitching (center) or possibly even avoid stitching (right).

In some special cases, a multiple spacer approach may be able to produce the periphery metal pattern, with its islands and bends, in only one mask exposure [5]. However, stitched double patterning has been the default choice for a very long time [3,4]; it should be expected to stay that way for as long as possible, even through the mid-teen-nm DRAM nodes [6].

References

[1] F. Chen, Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM.

[2] A. Erdmann et al., J. Micro/Nanolith. MEMS MOEMS 15, 021205 (2016); E. van Setten et al., 2012 International Symposium on EUV Lithography.

[3] Y. Kohira et al., Proc. SPIE 9053, 90530T (2014).

[4] S-Min Kim et al., Proc. SPIE 6520, 65200H (2007).

[5] F. Chen, Triple Spacer Patterning for DRAM Periphery Metal.

[6] C-M. Lim, Proc. SPIE 11854, 118540W (2021).


Also Read:

A Perfect Storm for EUV Lithography

Variable Cell Height Track Pitch Scaling Beyond Lithography

A Realistic Electron Blur Function Shape for EUV Resist Modeling

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution


Alphawave Semi is in Play!

by Daniel Nenni on 04-07-2025 at 10:00 am


We started working with Alphawave at the end of 2020 with a CEO Interview. I had met Tony Pialis before and found him to be a brilliant and charismatic leader, so I knew it would be a great collaboration. Tony was already an IP legend after his company was acquired by Intel. After 4+ years at Intel, Tony co-founded Alphawave in 2017. Today, Alphawave Semi is a global leader in high-speed connectivity and compute silicon for the world’s technology infrastructure. The company is publicly traded on the London Stock Exchange (LSE), with a recent stock price spike for obvious reasons. I have nothing but great things to say about Alphawave, absolutely.

When I first read about the exclusive OEM deal between Siemens and Alphawave I immediately thought: Why was this not an acquisition?

Siemens to accelerate customer time to market with advanced silicon IP through new Alphawave Semi partnership

“Siemens Digital Industries Software is a key and trusted partner for AI and hyperscaler developers, and our agreement simplifies and speeds the process of developing SoCs for these, and other leading-edge technologies, to incorporate Alphawave Semi’s IP,” said Tony Pialis, president and CEO, Alphawave Semi. “Our technologies play a critical role in reducing interconnect bottlenecks and this collaboration greatly expands our customer reach, allowing more companies to deliver next-level data processing.”

Ever since Siemens acquired Mentor Graphics in 2017 for $4.5B, they have pursued an aggressive acquisition strategy, acquiring dozens of companies. We track them on the EDA Merger and Acquisition Wiki. During the day I help emerging companies with exits, so I have M&A experience with the big EDA companies, including Siemens. They all do it differently, but I can tell you the Siemens M&A team is VERY good, and they do not take being outbid lightly. I was with Solido Design and Fractal when they were acquired by Siemens, and I was with Tanner EDA and Berkeley Design when they were acquired by Mentor Graphics. It was a night-and-day difference between the M&A processes.

The only answer I came up with as to why it was an OEM agreement versus an outright acquisition was price: Tony wanted more money. Purely speculation on my part, but now that I have read that both Qualcomm and Arm might be interested in acquiring Alphawave, it is more than speculation.

Qualcomm considers buying UK semiconductor firm Alphawave

Exclusive-Arm recently sought to acquire Alphawave for AI chip tech, sources say

Given that Alphawave has a lot of Intel experience inside and Lip-Bu Tan knows the importance of IP and ASIC design, maybe Intel will make an offer as well?

To be clear, Alphawave is not just an IP licensing company; they also do chiplets and custom ASICs. In 2022 they acquired SiFive’s ASIC business for $210M. This was the former Open-Silicon, which SiFive acquired in 2018 for an undisclosed amount. I worked with both Open-Silicon and eSilicon at the time. The ASIC business is a whole lot easier if you have the best IP money can buy, and Alphawave does.

I remember trying to talk Mentor into entering the IP business 15 years ago but they would not budge, which was clearly a mistake. Synopsys and Cadence leverage their IP for EDA business and that puts Siemens EDA at a distinct disadvantage, thus the OEM agreement with Alphawave. IP is a big driver in the semiconductor ecosystem, just ask TSMC.

Also, Broadcom has an ASIC business, Marvell has an ASIC business, MediaTek has an ASIC business, Qualcomm does not have an ASIC business, and Arm does not have an ASIC business. There is more than IP in play here.

I would bet that Siemens has some kind of Right of First Refusal tied to the OEM agreement and Tony getting offers from Qualcomm and Arm will pull that trigger. I really do not see Siemens having much of a choice but to pay a premium for Alphawave.

Exciting times in the semiconductor industry!

Also Read:

Podcast EP276: How Alphawave Semi is Fueling the Next Generation of AI Systems with Letizia Giuliano

Scaling AI Data Centers: The Role of Chiplets and Connectivity

How AI is Redefining Data Center Infrastructure: Key Innovations for the Future


Even HBM Isn’t Fast Enough All the Time

by Jonah McLeod on 04-07-2025 at 6:00 am


Why Latency-Tolerant Architectures Matter in the Age of AI Supercomputing

High Bandwidth Memory (HBM) has become the defining enabler of modern AI accelerators. From NVIDIA’s GB200 Ultra to AMD’s MI400, every new AI chip boasts faster and larger stacks of HBM, pushing memory bandwidth into the terabytes-per-second range. But beneath the impressive specs lies a less obvious truth: even HBM isn’t fast enough all the time. And for AI hardware designers, that insight could be the key to unlocking real performance.

The Hidden Bottleneck: Latency vs Bandwidth

HBM solves one side of the memory problem—bandwidth. It enables thousands of parallel cores to retrieve data from memory without overwhelming traditional buses. However, bandwidth is not the same as latency.

Even with terabytes per second of bandwidth available, individual memory transactions can still suffer from delays. A single miss in a load queue might cost dozens of clock cycles. The irregular access patterns typical of attention layers or sparse matrix operations often disrupt predictive mechanisms like prefetching. In many systems, memory is shared across multiple compute tiles or chiplets, introducing coordination and queuing delays that HBM can’t eliminate. And despite the vertically stacked nature of HBM, DRAM row conflicts and scheduling contention still occur.

In aggregate, these latency events create performance cliffs. While the memory system may be technically fast, it’s not always fast enough in the precise moment a compute engine needs data—leading to idle cycles in the very units that make these chips valuable.
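Little’s law makes the bandwidth-versus-latency distinction concrete: to keep a link saturated, a core must hold bandwidth × latency bytes in flight. A rough sketch, with illustrative figures rather than vendor specs:

```python
def required_outstanding_requests(bandwidth_GBps, latency_ns, access_bytes):
    """Concurrency needed to saturate a memory link (Little's law):
    bytes in flight = bandwidth * latency, divided by the request size."""
    bytes_in_flight = bandwidth_GBps * latency_ns  # 1 GB/s == 1 byte/ns
    return bytes_in_flight / access_bytes

# 1 TB/s of HBM bandwidth, 100 ns average latency, 64-byte accesses:
reqs = required_outstanding_requests(1000, 100, 64)  # -> 1562.5
```

Few compute engines can track on the order of 1,500 outstanding requests, which is why a single latency event can leave paper bandwidth stranded.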

Vector Cores Don’t Like to Wait

AI processors, particularly those optimized for vector and matrix computation, are deeply dependent on synchronized data flow. When a delay occurs—whether due to memory access, register unavailability, or data hazards—entire vector lanes can stall. A brief delay in data arrival can halt hundreds or even thousands of operations in flight.

This reality turns latency into a silent killer of performance. While increasing HBM bandwidth can help, it’s not sufficient. What today’s architectures truly need is a way to tolerate latency—not merely race ahead of it.

The Case for Latency-Tolerant Microarchitecture

Simplex Micro, a patent-rich startup based in Austin, has taken on this challenge head-on. Its suite of granted patents focuses on latency-aware instruction scheduling and pipeline recovery, offering mechanisms to keep compute engines productive even when data delivery lags.

Among their innovations is a time-aware register scoreboard, which tracks expected load latencies and schedules operations accordingly, avoiding data hazards before they occur. Another key invention enables zero-overhead instruction replay, allowing instructions delayed by memory access to reissue cleanly and resume without pipeline disruption. Additionally, Simplex has introduced loop-level out-of-order execution, enabling independent loop iterations to proceed as soon as their data dependencies are met, rather than being held back by artificial ordering constraints.
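A minimal sketch of the first idea, as I understand it from the description (a didactic toy, not Simplex Micro’s patented design): each destination register records the cycle its result becomes available, and a dependent instruction issues no earlier than its latest source-ready cycle. In a replay-based design, that wait becomes a clean reissue rather than a pipeline stall.

```python
class TimeAwareScoreboard:
    """Toy time-aware register scoreboard: tracks when each register's
    value will be available and delays dependent issue accordingly."""

    def __init__(self):
        self.ready_at = {}  # register name -> cycle its value is ready

    def issue(self, cycle, srcs, dst, latency):
        """Issue an instruction; returns the cycle it actually starts."""
        start = max([cycle] + [self.ready_at.get(r, 0) for r in srcs])
        self.ready_at[dst] = start + latency
        return start

sb = TimeAwareScoreboard()
t0 = sb.issue(0, srcs=[], dst="v0", latency=40)     # long-latency load
t1 = sb.issue(1, srcs=["v0"], dst="v1", latency=4)  # waits until cycle 40
```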

Together, these technologies form a microarchitectural toolkit that keeps vector units fed and active—even in the face of real-world memory unpredictability.

Why It Matters for Hyperscalers

The implications of this design philosophy are especially relevant for companies building custom AI silicon—like Google’s TPU, Meta’s MTIA, and Amazon’s Trainium. While NVIDIA has pushed the envelope on HBM capacity and packaging, many hyperscalers face stricter constraints around power, die area, and system cost. For them, scaling up memory may not be a sustainable strategy.

This makes latency-tolerant architecture not just a performance booster, but a practical necessity. By improving memory utilization and compute efficiency, these innovations allow hyperscalers to extract more performance from each HBM stack, enhance power efficiency, and maintain competitiveness without massive increases in silicon cost or thermal overhead.

The Future: Smarter, Not Just Bigger

As AI workloads continue to grow in complexity and scale, the industry is rightly investing in higher-performance memory systems. But it’s increasingly clear that raw memory bandwidth alone won’t solve everything. The real competitive edge will come from architectural intelligence—the ability to keep vector engines productive even when memory stalls occur.

Latency-tolerant compute design is the missing link between cutting-edge memory technology and real-world performance. And in the race toward efficient, scalable AI infrastructure, the winners will be those who optimize smarter—not just build bigger.

Also Read:

RISC-V’s Privileged Spec and Architectural Advances Achieve Security Parity with Proprietary ISAs

Harnessing Modular Vector Processing for Scalable, Power-Efficient AI Acceleration

An Open-Source Approach to Developing a RISC-V Chip with XiangShan and Mulan PSL v2


Podcast EP281: A Master Class in the Evolving Ethernet Standard with Jon Ames of Synopsys

by Daniel Nenni on 04-04-2025 at 10:00 am

Dan is joined by Jon Ames, principal product manager for the Synopsys Ethernet IP portfolio. Jon has been working in the communications industry since 1988 and has led engineering and marketing activities from the early days of switched Ethernet to the latest data center and high-performance computing Ethernet technologies.

Dan explores the history of the Ethernet standard with Jon, who provides an excellent overview of how the standard has evolved to deliver the high-performance, low-latency, and low-power capabilities demanded by contemporary AI-based systems. Jon explains the enduring compatibility of the standard, allowing multiple generations to coexist in legacy as well as new systems.

Jon spends some time explaining the impact the Ultra Ethernet standard is having on advanced systems in terms of capabilities such as network utilization and latency. How current and future Ethernet standards will impact the industry is also explored. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Cyril Sagonero of Keysom

by Daniel Nenni on 04-04-2025 at 6:00 am


Cyril Sagonero is the CEO and co-founder of Keysom, a deeptech company focused on custom RISC-V processors. In 2019, he founded Keysom with Luca Testa to address inefficiencies in off-the-shelf processors, developing tailored solutions for various industries. Under his leadership, the company secured €4 million in funding in September 2024 to advance its technology and expand internationally. Previously, he co-founded Koncepto, specializing in hardware and software development, and worked as a lecturer and pedagogical manager at ESTEI in Bordeaux, focusing on electronics and embedded systems.

Tell us about your company.
Keysom is a French startup funded in 2022, specializing in the architecture of processor cores based on the RISC-V Instruction Set Architecture (ISA).

Our mission is to empower companies by providing RISC-V IP with a no-code architectural exploration tool that automatically customizes processors for specific application requirements. This approach ensures optimal power, performance, and area (PPA) trade-offs, enabling industries to create processors without requiring in-depth technical expertise.

With the support of organizations like Alpha-RLH, ADI, Unitec, Région Nouvelle-Aquitaine, and BPIFrance, Keysom is committed to advancing processor design autonomy and efficiency within the semiconductor industry.

What problems are you solving?
At Keysom, we address the growing demand for customized processor architectures in an era where performance efficiency, power consumption, and cost optimization are critical.

Traditional off-the-shelf processors often come with unnecessary features that increase area usage, power consumption, and cost—a significant issue for industries needing embedded systems and edge computing solutions. We solve this by offering application-specific RISC-V processor designs that precisely align with our clients’ performance and power requirements.

Our no-code architectural exploration platform enables semiconductor companies to automatically generate optimized processor cores without requiring deep hardware design expertise. This solution accelerates time-to-market, reduces engineering costs, and improves energy efficiency—key challenges in markets like IoT, AI, automation, and robotics.

By leveraging dynamic reconfigurability and custom instruction sets, we empower companies to create processors that are uniquely tailored to their applications, unlocking better performance-per-watt and cost-effectiveness than traditional solutions.

What application areas are your strongest?
Our strongest application areas are IoT, Edge AI, and critical industrial systems.

In IoT, our customizable RISC-V processors deliver ultra-low power consumption and optimized performance for smart sensors, connected devices, and autonomous systems.

For Edge AI, we provide tailored architectures that accelerate AI inference tasks directly at the edge, enabling real-time decision-making with minimal latency and high energy efficiency.

In critical industrial systems, such as robotics, automation, and embedded control systems, our dynamic processor designs ensure high reliability, real-time performance, and long product lifecycles—essential requirements for mission-critical applications.

What keeps your customers up at night?
Our customers are primarily focused on achieving the right balance of performance, cost, and power. They need processors that deliver high performance while also being energy-efficient and cost-effective. However, they also seek tools that are open, easy to use, and capable of supporting customization without a steep learning curve.

The RISC-V open-standard is crucial to them as it enables innovation and provides the flexibility to design processors tailored to their specific needs, without being locked into proprietary solutions.

At the same time, customers are increasingly looking for ways to reduce dependency on large EDA providers. By providing a no-code platform for designing RISC-V-based processors, we offer a path to greater autonomy and the ability to innovate without heavy reliance on expensive, complex design tools. This gives them more control over their designs, improving both agility and cost-effectiveness.

What does the competitive landscape look like and how do you differentiate?
The competitive landscape in the semiconductor and RISC-V industries is extremely challenging. We are a French company, and we compete on a global scale, facing some of the largest and most established players in our industry.

However, we have several unique advantages that help us differentiate ourselves. Firstly, we benefit from the strong support of European investments, which provide us with the resources to accelerate our growth and scale innovation.

Additionally, our technology and IP Core offer immediate value to our customers. By providing customizable RISC-V solutions that are directly aligned with application-specific needs, we enable faster time-to-market, better power performance, and cost efficiency—qualities that set us apart in a highly competitive environment.

What new features/technology are you working on?
We are constantly working on advancing our technology to meet the evolving needs of our customers. Currently, we are focusing on developing even more optimized 32-bit cores, which will offer better performance and efficiency for a wide range of applications.

Additionally, we are excited about our upcoming Edge AI accelerator, which is specifically designed to handle the growing demand for real-time AI inference at the edge. This product will enable our customers to deploy AI models directly in distributed environments, with low latency and low power consumption.

Lastly, a significant area of research for us is reconfigurable architectures. We’re exploring how to create architectures that can dynamically adapt to different workloads and requirements, offering greater flexibility and optimization for diverse applications. This innovation will allow us to provide more customizable and adaptive solutions to our customers in industries like IoT, AI, and industrial automation.

How do customers normally engage with your company?
Customers typically engage with us through multiple channels. We are actively building a network of sales representatives across Europe, the USA, and Asia to better serve our global customer base.

We also meet our customers in person at major industry events, such as Embedded World 2025 in Germany.

In addition, we believe in sharing knowledge and insights with the broader community, so we actively publish and share our expertise to engage with professionals and experts in our field.

Finally, to provide hands-on experience with our solutions, we offer a free trial of our EDA Cloud Keysom Core Explorer. This gives potential customers the opportunity to explore our platform and see firsthand how it can benefit their projects. For immediate access, feel free to contact me directly, and I’ll be happy to assist.

Contact Keysom

Also Read:

CEO Interview with Matthew Stephens of Impact Nano

CEO Interview with Dr Greg Law of Undo

CEO Interview with Jonathan Klamkin of Aeluma


A Synopsys Webinar Detailing IP Requirements for Advanced AI Chips

by Mike Gianfagna on 04-03-2025 at 10:00 am


Generative AI is dramatically changing the compute power that must be delivered by advanced designs. This demand has risen by more than 10,000 times in the past five to six years. The increased demand has impacted the entire SoC design flow. We are now faced with going beyond 1 trillion transistors per chip, and systems now consist of many chips in a highly sophisticated package. Synopsys recently presented a webinar on these trends and offered some excellent strategies to tame the ongoing demands of GenAI. A link to the replay is coming, but first let’s examine some of the topics addressed in the Synopsys webinar detailing IP requirements for advanced AI chips.

You can access the webinar replay here.

About the Presenter

The webinar is presented by Dr. Manuel Mota, senior product manager responsible for the die-to-die interface IP product line at Synopsys. Manuel has been with Synopsys for over 15 years. Previously, he held leadership roles at MIPS Technologies and Chipidea. He began his career as a researcher in microelectronics for high-energy physics experiments at CERN, the European Organization for Nuclear Research.

Dr. Mota has authored multiple technical papers on multi-die design. He is clearly an expert in this area and the perfect person to present this webinar.

About the Webinar

Manuel begins the webinar with an overview of the impact generative AI has had on technology evolution. He shares many insights on the trends underway. The data is quite compelling. Here are just a few of the many items discussed:

  • Larger compute chips
    • Monolithic -> multi-die
    • Advanced nodes for compute die
  • Advanced packaging
    • RDL interposers spanning multiple reticles
    • 2D/2.5D -> 3D/3.5D
  • Memory architectures
    • cHBM, memory stacking
  • Die-to-die bandwidth increasing
    • 16G  -> 32G  -> 64Gbps/pin

Manuel then discusses SoC architectures that focus on higher compute performance. Included in this discussion are central I/O hub, homogeneous, cHBM extension and 3D stacking. Manuel then does a deep dive into three key technologies that show promise to deliver the required performance for multi-die designs.

1) Die-to-Die Communication

40/64G speeds can provide a 4X bandwidth advantage. Manuel discusses the standards that are driving increases in die-to-die communication. UCIe is a key driver here, and he spends some time explaining the various UCIe generations along with the benefits and challenges.
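The 4X figure is straightforward arithmetic: per-module die-to-die bandwidth scales linearly with the per-pin rate. A back-of-the-envelope sketch, where the 64-lane module width is an illustrative assumption rather than a quoted Synopsys configuration:

```python
def module_bandwidth_GBps(gbps_per_pin, lanes=64):
    """Aggregate one-way bandwidth of a die-to-die module in GB/s."""
    return gbps_per_pin * lanes / 8  # bits -> bytes

bw_16g = module_bandwidth_GBps(16)  # 128.0 GB/s
bw_64g = module_bandwidth_GBps(64)  # 512.0 GB/s, a 4X advantage
```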

He then discusses Synopsys IP solutions for 64Gbps die-to-die communication. Key points here include:

  • Lightweight implementation optimized for captive systems
  • Differentiated feature set:
    • Data rate up to 64Gbps/pin & low power modes and knobs
    • Extensive testability, bring-up/debug & reliability functionality
    • Lite FEC on link adapter for low latency error correction
  • Modular architecture enables adaptation to system requirements
  • Supports streaming protocols
  • Supports all package technologies (2D, 2.5D)
  • Leverages Synopsys UCIe 40G expertise & track record

Manuel gets into the details of how this IP is used in real world applications, with detailed information on technical capabilities and examples of delivered performance.

2) Custom HBMs

This approach can provide a 2X memory bandwidth advantage and more flexibility. Manuel discusses some details of custom HBMs enabled by a logic process on the base die and how this opens the path to novel and efficient SoC memory architectures. He discusses the many benefits this approach offers. Some of the benefits covered include:

  • Extend usage of HBM base die
  • Multiple memory tiers reuse same die edge
  • Offload some calculations to chiplet
  • Isolates host from memory technology
  • Extensible to other memory types

Manuel covers other benefits and strategies associated with this approach as well.

3) 3D Stacking

80% power savings are possible here, with an approach that delivers higher integration, shorter wiring and a more compact form factor. 3D stacking is clearly enabling the next generation of high performance SoCs. Manuel presents some commercial examples of this technology from Intel and AMD.

He dives into the architectural challenges that must be met to achieve a successful 3D implementation. Multi-die interconnect pitch scaling is a critical item here. He points out that shorter interconnects lead to lower die-to-die interface area, power, and latency. Manuel covers the details of several approaches for multi-die interconnect, as summarized in the diagram below.

Multi die interconnect approaches

Multi-die assembly options from 2D to 3.5D are covered in detail. Manuel also explores the design challenges and benefits of 3D stacking, for both face-to-back and face-to-face configurations, and how the various UCIe standards fit into each approach.

It’s clear that one approach doesn’t fit every project. Manuel spends some time explaining the various approaches that are available for multi-die implementation and how to choose the right one for your project. He concludes the webinar presentation with an overview of the comprehensive set of solutions that Synopsys provides to enable your journey to heterogeneous integration. A summary of this discussion is shown in the figure below.

Synopsys Multi Die Solution

The webinar concludes with an informative live Q&A session from audience questions.

To Learn More

It seems that every design team is feeling the power, performance and cost demands of advanced AI technology. If you are considering a multi-die approach, this webinar is a must-see. Dr. Manuel Mota has deep experience with this design style, and he does a great job sharing ideas and technical approaches. You will learn something new. You can access the replay of the webinar here. And that’s how you can see the Synopsys webinar detailing IP requirements for advanced AI chips.

Also Read:

Synopsys Webinar: The Importance of Security in Multi-Die Designs – Navigating the Complex Landscape

Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation

Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures


A Perfect Storm for EUV Lithography

A Perfect Storm for EUV Lithography
by Fred Chen on 04-03-2025 at 6:00 am

EUV Made Easy 1

Electron blur, stochastics, and now polarization are all becoming stronger influences in EUV lithography as pitch continues to shrink

As EUV lithography continues to evolve, targeting smaller and smaller pitches, new physical limitations continue to emerge as formidable obstacles. While stochastic effects have long been recognized as a critical challenge [1,2], and electron blur has more recently been considered in depth [3], polarization effects [4,5] are now becoming a growing concern in image degradation. As the industry moves beyond the 2nm node, these influences create a perfect storm that threatens the quality of EUV-printed features. Loss of contrast from blur and polarization makes it more likely for stochastic fluctuations to cross the printing threshold [3].

Figure 1 shows the combined effects of polarization, blur, and stochastics for 18 nm pitch as expected on a 0.55 NA EUV lithography system. Dipole-induced fading [6] is ignored as a relatively minor effect. There is a 14% loss of contrast if unpolarized light is assumed [5], but electron blur has a more significant impact (~50% loss of contrast) in aggravating stochastic electron behavior in the image. The total loss of contrast is obtained by multiplying the contrast reduction from polarization by the contrast reduction from electron blur.
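The multiplication of contrast losses described above can be sketched as a quick calculation. This is a minimal illustration using the loss figures quoted for the 18 nm pitch case (14% from unpolarized light, ~50% from electron blur); the function name is my own, not from the article:

```python
# Sketch of the contrast-multiplication estimate described above.
# Each independent effect leaves a fraction (1 - loss) of the original
# contrast; the combined contrast is the product of those fractions.

def combined_contrast(losses):
    """Multiply the surviving contrast fraction from each independent effect."""
    contrast = 1.0
    for loss in losses:
        contrast *= (1.0 - loss)
    return contrast

# 14% loss from assuming unpolarized light, ~50% loss from electron blur
remaining = combined_contrast([0.14, 0.50])
print(f"remaining contrast: {remaining:.0%}")  # ~43% of the ideal contrast
print(f"total loss: {1 - remaining:.0%}")      # ~57% total contrast loss
```

So the two effects together leave less than half the ideal image contrast, which is why stochastic fluctuations can cross the printing threshold.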

Figure 1. 9 nm half-pitch image as projected by a 0.55 NA, 13.5 nm wavelength EUV lithography system. No dipole-induced image fading [6] is included. The assumed electron blur is shown on the right. The stochastic electron density plot in the center assumes unpolarized light (50% TE, 50% TM) [5]. A 20 nm thick metal oxide resist (20/um absorption) was assumed.

The edge “roughness” is severe enough to count as defective. The probability of a stochastic fluctuation crossing the printing threshold is not negligible. As pitch decreases, we should expect this to grow worse, due to the more severe impact of electron blur [3] as well as the loss of contrast for unpolarized light [4,5] (Figure 2).

Figure 2. Reduction of image contrast worsens with smaller pitch. The stochastic fluctuations in electron density also grow correspondingly more severe. Aside from pitch, the same assumptions were used as in Figure 1.

Note that even for the 14 nm pitch case, the 23% loss of contrast from going from TE-polarized to unpolarized is still less than the loss of contrast from electron blur (~60%). As pitch continues to decrease, the polarization contribution will grow, along with the increasing impact from blur. As noted in the examples considered above, although polarization is recognized within the lithography community as a growing concern, the contrast reduction from electron blur is still more significant. Therefore, we must expect any useful analysis of EUV feature printability and stochastic image fluctuations to include a realistic electron blur model.

References

[1] P. de Bisschop, “Stochastic effects in EUV lithography: random, local CD variability, and printing failures,” J. Micro/Nanolith. MEMS MOEMS 16, 041013 (2017).

[2] F. Chen, Stochastic Effects Blur the Resolution Limit of EUV Lithography.

[3] F. Chen, A Realistic Electron Blur Function Shape for EUV Resist Modeling.

[4] F. Chen, The Significance of Polarization in EUV Lithography.

[5] H. J. Levinson, “High-NA EUV lithography: current status and outlook for the future,” Jpn. J. Appl. Phys. 61 SD0803 (2022).

[6] T. A. Brunner, J. G. Santaclara, G. Bottiglieri, C. Anderson, P. Naulleau, “EUV dark field lithography: extreme resolution by blocking 0th order,” Proc. SPIE 11609, 1160906 (2021).

Thanks for reading Exposing EUV! Subscribe for free to receive new posts and support my work.


Also Read:

Variable Cell Height Track Pitch Scaling Beyond Lithography

A Realistic Electron Blur Function Shape for EUV Resist Modeling

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution

Rethinking Multipatterning for 2nm Node


Podcast EP280: A Broad View of the Impact and Implications of Industrial Policy with Economist Ian Fletcher

Podcast EP280: A Broad View of the Impact and Implications of Industrial Policy with Economist Ian Fletcher
by Daniel Nenni on 04-02-2025 at 10:00 am

Dan is joined by economist Ian Fletcher. Ian is on the Coalition for a Prosperous America Advisory Board. He is the author of Free Trade Doesn’t Work, coauthor of The Conservative Case Against Free Trade, and author of the new book Industrial Policy for the United States: Winning the Competition for Good Jobs and High-Value Industries. He has been senior economist at the Coalition, a research fellow at the US Business and Industry Council, an economist in private practice, and an IT consultant.

In this far-reaching and insightful discussion, Dan explores the history, impact and future implications of the industrial policies of the US and other nations around the world with Ian. Ian explains the beginnings of industrial policy efforts in the US and the impact these programs have had across a wide range of technologies and industries. Ian provides his views of what has worked and what needs re-focus to achieve the desired results.

Through a series of historical and potential future scenarios Ian illustrates the complexity of industrial policy and the substantial impacts it has had on the world around us.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Big Picture PSS and Perspec Deployment

Big Picture PSS and Perspec Deployment
by Bernard Murphy on 04-02-2025 at 6:00 am

semiconductor design realization

I met Moshik Rubin (Sr. Group Director, Product Marketing and BizDev in the System Verification Group at Cadence) at DVCon to talk about PSS (the Portable Stimulus Standard) and Perspec, Cadence’s platform to support PSS.  This was the big picture view I was hoping for, following more down in the details views from earlier talks.

The standard and supporting tools can do many things, but every technology has a compelling sweet spot: something you probably couldn’t do any other way. Moshik provided some big picture answers to this question in what Advantest and Qualcomm are doing today. Both have built bridges between testing objectives, in one case for hardware/software integration, in the other between pre- and post-silicon testing. Each provides a clear answer to the question: “where is PSS the only reasonable solution?”

Qualcomm automating hardware/software integration testing

Most hardware these days is memory-mapped: embedded software interacts with hardware functions (video, audio, AI, etc.) through memory-mapped registers. A register has an address in the memory map along with a variety of properties; software interacts with the hardware by writing/reading this address. This interface definition is the critical bridge between hardware and software and must be validated thoroughly.

I remember many years ago system AEs wrote apps to generate these definitions as header files and macros, together with documentation to guide driver/firmware developers. As the design evolved, they would update the app to reflect changes. This worked well, but the bridge was manually built and maintained. As the number of registers and properties on those registers grew, opportunities for mistakes also grew. (One of my favorites, should a flag be implemented as “read” or “clear on read”? Clear on read seems an easy and fast choice but can hide some difficult bugs.)
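The kind of generator app described above can be sketched in a few lines. This is a hypothetical example of generating a C header and reference documentation from one register specification; the register names, fields, and output formats are illustrative, not Qualcomm’s actual flow:

```python
# Minimal sketch of a single-source register-definition generator, in the
# spirit of the apps described above. The register map below is hypothetical.

REGISTERS = [
    # (name, offset, access, description) — "RC" means clear-on-read
    ("CTRL",   0x00, "RW", "Block enable and mode select"),
    ("STATUS", 0x04, "RO", "Busy/ready flags"),
    ("IRQ",    0x08, "RC", "Interrupt flags, cleared on read"),
]

def emit_c_header(base_addr):
    """Emit C #define macros for driver/firmware developers."""
    lines = []
    for name, offset, access, desc in REGISTERS:
        lines.append(
            f"#define REG_{name}_ADDR 0x{base_addr + offset:08X}  /* {access}: {desc} */"
        )
    return "\n".join(lines)

def emit_docs():
    """Emit a plain-text table for the firmware reference documentation."""
    lines = ["Offset  Access  Register  Description"]
    for name, offset, access, desc in REGISTERS:
        lines.append(f"0x{offset:02X}    {access:6}  {name:8}  {desc}")
    return "\n".join(lines)

print(emit_c_header(0x40000000))
print()
print(emit_docs())
```

The point of such a flow is that the header, the documentation, and (in the PSS-based version) the test realizations are all regenerated from one description, so they can never drift out of sync as the design evolves.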

Qualcomm chose to automate this testing through a single source of truth flow based on PSS and Perspec. They first develop PSS descriptions of use-case scenarios and leaf-level (atomic) behaviors, abstracted from detailed implementation, then develop test realizations (mapping the PSS level to target test engine) for each target. These are a native mode (C running on the host processor interacting with the rest of the SoC), a UVM mode which can interact directly with a UVM testbench, and a firmware reference mode which generates documentation to be used by driver/software developers. As the design evolves, the PSS definition is updated (intentionally, or to fix bugs exposed in regression testing), and all these levels are updated in sync.

Incidentally, I know as I’m sure Qualcomm knows that there are already tools to build register descriptions, header files, and test suites. I see Qualcomm’s approach as complementary. They need PSS suites to test across the vertical range of design applications and to define synthetic tests which must probe system-level behaviors not fully comprehended in register descriptions. Seems like an opportunity for those register tools to integrate in some way with this PSS direction.

This is a big step forward from the ad-hoc support I remember.

Advantest automating pre-/post-silicon testing

Advantest showed a demo of their flow at DVCon, apparently very well attended. Connecting pre- and post-silicon testing seems to be a hot button for a lot of folks. Historically it has been difficult to automate a bridge between these domains. Pre-silicon verification could generate flat files of test vectors that could be run on an ATE tester or in a bench setup, but that was always cumbersome and limited. Now Cadence (and others) have worked with Advantest to directly use the PSS developed in pre-silicon testing for post-silicon validation. The Advantest solution (SiConic) unifies pre-silicon and post-silicon testing in an automated, versatile environment by connecting the device functional interfaces (USB, PCIe, ETH) to external interfaces such as JTAG, SPI, UART, and I2C, enabling rich PSS content to execute directly against silicon. That’s a major advance for post-silicon testing, now moving beyond post-silicon exercisers in the complexity of tests that can be run and helping to isolate root causes for failures.

I should add one more important point. It seems tedious these days to say that development cycles are being squeezed hard, but for the hyperscalers and other big system vendors this has never been more true. They are tied to market and Wall Street cycles, requiring that they deliver new advances each year. That puts huge pressure on all in-house development, on test development as much as design development. Anywhere design teams can find canned, proven content, they are going to snatch it up. In test they are looking for more test libraries, VIP, and system VIP. Perspec is supported by extensive content for Arm, RISC-V, and x86 platforms, including System VIP building blocks for system testbench generation, traffic generation, performance analysis, and scoreboarding.

You can learn more about Cadence Perspec HERE.

Also Read:

Metamorphic Test in AMS. Innovation in Verification

Compute and Communications Perspectives on Automotive Trends

Bug Hunting in Multi Core Processors. Innovation in Verification