
A Perspective on AI Opportunities in Software Engineering

by Bernard Murphy on 04-16-2025 at 6:00 am


Whatever software engineering teams are doing to leverage AI in their development cycles should be of interest to us in hardware engineering. Not in every respect perhaps, but there should be significant commonalities. I found a recent paper on the Future of AI-Driven Software Engineering from the University of Auckland, NZ with some intriguing ideas I thought worth sharing. The authors' intent is to summarize high-level ideas rather than algorithms, though there are abundant references to papers which, on a sample review, do get into more detail. As this is a fairly long paper, here I just cherry-pick a few concepts that stood out for me.

Upsides

In using LLMs for code generation, the authors see increased emphasis on RAG (retrieval-augmented generation) for finding code snippets versus direct code synthesis from scratch. They also share an important finding from a StackOverflow blog post reporting that hits on their website are declining. This is significant since StackOverflow has been a very popular source for exchanging ideas and code snippets. StackOverflow attributes the decline to LLMs like GPT-4, which both summarize a response to a user prompt and directly provide code snippets. Such RAG-based systems commonly offer links to retrieved sources, but clearly these are not compelling enough to keep up website hits. I find the same with Google search, where results now often start with an AI-generated overview. I often (though not always) find this useful, and I often don't follow the links.
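To make the retrieval idea concrete, here is a minimal sketch of the retrieval step behind such RAG-based code assistants, using bag-of-words cosine similarity over a toy snippet store. Everything here (the snippet store, the descriptions, the function names) is illustrative; real systems use learned embeddings and indexed corpora.

```python
from collections import Counter
import math

# Toy snippet store standing in for an indexed code corpus (illustrative only).
SNIPPETS = {
    "reverse a list in python": "items[::-1]",
    "read a file line by line": "with open(path) as f:\n    for line in f: ...",
    "sort a dict by value": "dict(sorted(d.items(), key=lambda kv: kv[1]))",
}

def bow(text):
    """Bag-of-words vector as a token-count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k snippets whose descriptions best match the query."""
    q = bow(query)
    ranked = sorted(SNIPPETS, key=lambda d: cosine(q, bow(d)), reverse=True)
    return [(d, SNIPPETS[d]) for d in ranked[:k]]

desc, code = retrieve("how to reverse a python list")[0]
print(desc, "->", code)  # the retrieved source is kept, so a link could be offered
```

Note that the retrieved description travels with the snippet, which is exactly what lets these systems offer source links, even if, as StackOverflow observes, users rarely follow them.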

Meanwhile Microsoft reports that the paid customer base for GitHub Copilot (a Microsoft product) is growing 30% quarter over quarter, now at 1.3M developers across 50K organizations. Clearly for software development the ease of generating code through Copilot has enough appeal to extract money from subscribers.

Backing up a step, before you can write code you need a clear requirements specification. Building such a specification can be a source of many problems: mapping from a client's mental image of needs to an implementer's image in natural language, with ambiguities, holes, and the common reality of an evolving definition. AI agents could play a big role here by interactively eliciting requirements, proposing examples and scenarios to help resolve ambiguities and plug holes. Agents can also provide some level of requirements validation by identifying vague or conflicting requirements.

Maintaining detailed product documentation as development progresses can be a huge burden on developers, and that documentation can easily drift out of sync with the implemented reality, especially through incremental changes and bug fixes. The authors suggest this tedious task could be better handled through agent-based generation and updates, able to stay in sync with every large or small change. Along similar lines, not everyone in the product hierarchy will want detailed implementation documentation. Product managers, AEs, application developers, and clients all need abstracted views best suited to their individual interests. Here also there is opportunity for LLMs to generate such abstractions.

Downsides

The obvious concern with AI-generated code or tests is the hallucination problem. While accuracy will no doubt improve with further training, it is unrealistic to expect high-certainty responses to every possible prompt. Hallucinations are more a feature than a bug, no matter how extensive the training.

Another problem is over-reliance on AI. As developers depend more on AI-assisted answers to their needs, there is a real concern that their problem-solving and critical thinking skills will decline over time. Without expert human cross-checks, how do we ensure that AI-induced errors do not leak through to production? A common response is that the rise of calculators didn't lead to innumeracy; they simply made us more effective. By implication AI will reach that same level of trust in time. Unfortunately, this is a false equivalence. Modern calculators produce correct answers every time; there is no indication that AI can rise to this level of certainty. If engineers lose the ability to spot errors in AI claims for such cases, quality will decline noticeably, even disastrously. (I should stress that I am very much a proponent of AI for many applications. I am drawing a line here at unsupervised AI used for applications requiring engineering precision.)

A third problem will arise as more of the code used in training and RAG is itself generated by AI. The “genotype” of this codebase will fail to weed out weak/incorrect suggestions unless some kind of Darwinian stimulus is added to the mix. Reinforcement based learning could be a part of the answer to improve training, but this won’t fix stagnation in RAG evolution. Worse yet, experts won’t be motivated to add new ideas (and where would they add them?) if recognition for their contribution will be hidden behind an LLM response. I didn’t see an answer to this challenge in the paper.

Mitigating Downsides

The paper underlines the need for unit testing unconnected to AI. This is basic testing hygiene – don’t have the same person (or AI) both develop and test. I was surprised that there was no mention of connecting requirements capture to testing since those requirements should provide independent oracles for correct behavior. Perhaps that is because AI involvement in requirements capture is still mostly aspirational.

One encouraging idea is to lean more heavily on metamorphic testing, something I have discussed elsewhere. Metamorphic testing checks relationships in behavior which should be invariant through low-level changes in implementation or in use-case tests. If you detect differences in such a relation during testing, you know you have an error in the design. However, finding metamorphic relations is not easy. The authors suggest that AI could uncover new relations, as long as each suggestion is carefully reviewed by an expert. Here the expert must ask whether an apparent invariant is just an accident of the testing or something that really is invariant, at least in the scope of usage intended for the product.
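The power of the idea is that no oracle for the "correct" output is needed, only a relation between runs. A minimal sketch, using a deliberately simple example: a sort routine whose output must be invariant under permutation of its input (the routine under test is a stand-in, not anyone's real implementation).

```python
import random

def sort_under_test(xs):
    # Stand-in for the implementation being tested (illustrative).
    return sorted(xs)

def check_permutation_invariance(xs, trials=10):
    """Metamorphic relation: sorting any permutation of the input must
    yield the same output. No oracle for the correct sorted order is
    needed -- only the relation between runs is checked."""
    reference = sort_under_test(xs)
    for _ in range(trials):
        shuffled = xs[:]
        random.shuffle(shuffled)
        if sort_under_test(shuffled) != reference:
            return False  # relation violated: an error in the design
    return True

print(check_permutation_invariance([3, 1, 4, 1, 5, 9, 2, 6]))
```

The expert-review caveat in the paper maps directly onto this sketch: permutation invariance genuinely holds for sorting, but an AI-proposed relation might hold only for the particular inputs tried, which is exactly what a reviewer must rule out.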

Thought-provoking ideas, all with relevance to hardware design.

Also Read:

The Journey of Interface Protocols: Adoption and Validation of Interface Protocols – Part 2 of 2

EDA AI agents will come in three waves and usher us into the next era of electronic design

Beyond the Memory Wall: Unleashing Bandwidth and Crushing Latency


CEO Interview with Ronald Glibbery of Peraso

by Daniel Nenni on 04-15-2025 at 10:00 am


Mr. Glibbery leads all functional areas of Peraso Inc. and has served as chief executive officer since December 2021. He co-founded Peraso Technologies Inc. in 2009 and previously served as its chief executive officer. Prior to co-founding Peraso Technologies, Mr. Glibbery was President of Intellon, a pioneer and leader in the development of semiconductor devices used for powerline communications. Previously, Mr. Glibbery was a member of the management team of LSI Logic, Canada.

Tell us about your company?

Peraso Inc. (“Peraso”) is a global leader in the development and high-volume deployment of semiconductor solutions for the unlicensed 60 GHz (mmWave) spectrum. With a focus on high-performance, scalable wireless technologies, Peraso serves a diverse range of markets, including fixed wireless access (FWA), aerospace and defense, transportation communications, and professional video delivery.

What problems are you solving?
    • Providing affordable, reliable connectivity in challenging urban environments
      • A major benefit of mmWave technology is that it utilizes beamforming, the ability to focus radio energy into a narrow beam. Thus many mmWave networks can coexist in a dense user environment because adjacent beams do not interfere with each other. This is in stark contrast to traditional wireless technology, where there is substantial overlap between adjacent networks, rendering it unsuited for dense deployments.
    • Meeting the performance of wired/fiber infrastructure with better physical security and lower cost
      • Peraso's 60GHz technology is able to operate at data rates of 3Gbps, competing favorably with the premium 1Gbps data rate offered by fiber operators. Further, due to the use of beamforming, third-party snooping is very difficult, providing carriers with a fundamental level of security at the physical layer.
    • Open to any operator without spectrum acquisition or costly 4G/5G equipment
      • The 60GHz spectrum does not require a license to operate, so operators do not need the significant capital required for licensed bands. This is a valid deployment model, as beamforming enables simultaneous transmissions in a common environment.
    • Overcoming congestion in sub-6 GHz Wi-Fi networks
      • Same as item 1
    • Maintaining service during frequent power outages
      • A major advantage Peraso provides is the ability to operate with relatively modest power consumption. In many of our jurisdictions the electrical power grid is unreliable, and many of our customers use batteries in conjunction with solar cells without relying on the grid.
What application areas are your strongest?

mmWave:

Peraso Inc. is a global leader in the development and high-volume deployment of semiconductor solutions for the unlicensed 60 GHz (mmWave) spectrum.

Tactical Communications:

We have also secured significant interest and traction in the defense / tactical communications space as a result of our technology's inherently stealthy protocol, low probability of interception (LPI), ease of deployment, and ability to deliver gigabit speeds to all platforms.

Transportation:

Peraso has demonstrated the benefits of mmWave radio systems across several transportation platforms. Most recently, Peraso has developed a High Velocity Roaming (HVR) operating mode for its 60 GHz modules. HVR ensures that data terminals located along the rail side are able to track the fast-moving train and that the train's terminals are able to seamlessly switch between connection points in a make-before-break sequence. HVR also provides system scalability: each channel provides up to 3 Gbps of traffic bandwidth, and multiple channels can be aggregated. Peraso's HVR can provide riders with faster, more reliable data which will meet foreseeable future demands.

What keeps your customers up at night?

Ultimately our customers are the service providers who deliver broadband internet services. Their number one concern is making sure their customers have reliable internet service. Peraso provides critical technology to address these concerns, including primary features such as high reliability rain or shine, low power operation, resistance to interference, and high security.

What does the competitive landscape look like and how do you differentiate?

Peraso is the only semiconductor supplier that provides OEMs with a complete 60GHz system solution. This includes RF, signal processing, comprehensive software support, and phased-array antenna technology.

What new features/technology are you working on?
    • Reduced power consumption
    • AP-AP roaming
    • Increased user support
    • Broader antenna technology
How do customers normally engage with your company?

New partners and existing customers wishing to engage with us are encouraged to visit our website at https://perasoinc.com/contact-us/.

Also Read:

CEO Interview with Pierre Laboisse of Aledia

CEO Interview with Cyril Sagonero of Keysom

CEO Interview with Matthew Stephens of Impact Nano


Design IP Market Increased by All-time-high: 20% in 2024!

by Eric Esteve on 04-14-2025 at 10:00 am


Design IP revenues reached $8.5B in 2024, an all-time-high growth of 20%. Wired Interface is still driving Design IP growth at 23.5%, but the Processor category also grew by 22.4% in 2024. This is consistent with the makeup of the Top 4 IP companies: ARM (mostly focused on processors) and the three vendors leading the wired interface category, Synopsys, Cadence, and Alphawave. The top 4 vendors are growing even faster than the market (more in the 25% range) and represent a total of 75% share in 2024, compared to 72% in 2023.

Their preferred target is mobile computing for ARM and high-performance computing (HPC) applications for the #2, #3, and #4 IP companies. The preferred IP for the HPC segment is based on interconnect protocols like PCIe and CXL, Ethernet and SerDes, chip-to-chip (UCIe), and DDR memory controllers including HBM. Let's add that these vendors position advanced-node solutions to catch the needs of AI hyperscaler developers, even if Synopsys also targets the mainstream market and de facto enjoys larger revenues.

IPnest has released the “Design IP Report” in April 2025, ranking IP vendors by category and by nature, license and royalty.

How can the Design IP market in 2024 be consistent with semiconductor market behavior? Looking at TSMC revenues by platform in Q4 2024, we see HPC at 53%, smartphone 35%, IoT 5%, automotive 4%, and others 3%. By platform, revenue from HPC, smartphone, IoT, automotive, and DCE increased 58%, 23%, 2%, 4%, and 2% respectively from 2023, while Others decreased.

In 2024, the IP market was strongly driven by vendors supporting HPC applications selling wired interface IP (Synopsys, Cadence, Alphawave, and Rambus), but also by vendors selling CPUs and GPUs for smartphones (ARM and Imagination Technologies). The IP market perfectly mimics the semiconductor market: most of the year-over-year growth is coming from a single segment, HPC (even if ARM's performance is notable, with 26% YoY growth).

Looking at the 2016-2024 IP market evolution brings interesting information about the main trends. The global IP market has grown by 145%, while the Top 3 vendors have seen unequal growth: #1 ARM grew by 124%, while #2 Synopsys grew by 326% and #3 Cadence by 321%.

Market share information is even more significant. ARM moved from 48.1% in 2016 to 44% in 2024, while Synopsys grew from 13.1% to 23%.

This can be synthesized by comparing the 2016 to 2024 CAGR:

      • Synopsys CAGR: 19%
      • Cadence CAGR: 19%
      • ARM CAGR: 9%
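These CAGR figures follow directly from the 2016-2024 growth percentages. A quick sanity check (the small differences from the reported 19/19/9% presumably come from IPnest computing on exact revenues rather than the rounded growth figures quoted above):

```python
def cagr(total_growth_pct, years=8):
    """Compound annual growth rate implied by total growth over `years`.
    E.g. 326% total growth means revenue multiplied by 4.26."""
    multiple = 1 + total_growth_pct / 100
    return (multiple ** (1 / years) - 1) * 100

# 2016-2024 total growth figures quoted in the article.
for name, growth in [("Synopsys", 326), ("Cadence", 321), ("ARM", 124)]:
    print(f"{name}: {cagr(growth):.1f}%")
```

This prints roughly 19.9% for Synopsys, 19.7% for Cadence, and 10.6% for ARM, consistent with the ranking and close to the reported values.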

IPnest has also calculated IP vendor rankings by license and by royalty IP revenues:

Synopsys is the clear #1 by IP license revenues with 32% market share in 2024, while ARM is #2 with 30%.

Alphawave, created in 2017, is now ranked #4 just behind Cadence, showing how essential high-performance SerDes IP is for modern data-centric applications and for building a performant interconnect IP portfolio; it supported Alphawave's growth from zero to over $270 million in 7 years. Reminder: "Don't mess with SerDes!"

Eric Esteve from IPnest

To buy this report, or just to discuss IP, contact Eric Esteve (eric.esteve@ip-nest.com)

Also Read:

Balancing the Demands of OTP for Advanced Nodes with Synopsys IP

Alphawave Semi is in Play!

Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation


Podcast EP283: The evolution of Analog High Frequency Design and the Impact of AI with Matthew Ozalas of Keysight

by Daniel Nenni on 04-11-2025 at 10:00 am

Dan is joined by Matthew Ozalas, a distinguished RF engineer at Keysight Technologies. With extensive experience in RF and microwave engineering, Matthew has made significant contributions to the field, particularly in the design and development of RF power amplifiers. His expertise spans hardware and software applications as well as design and automation.

In this insightful discussion, Dan explores the realm of RF/high-frequency design with Matthew, who describes some of the unique requirements of this class of design. The impact of AI is explored as well. Matthew explains how the massive data available needs to be harnessed; the methods he describes differ from those of mainstream digital design. Strategies to build new AI-assisted design flows are also explored. Matthew describes the importance of using the same analysis technologies across all phases of the design, from chip to package to system. He also describes the work going on at Keysight to enable novel AI-assisted design flows for high-frequency design and what the future will look like.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Generative AI Comes to High-Level Design

by Daniel Payne on 04-10-2025 at 10:00 am


I’ve watched the EDA industry change the level of design abstraction starting from transistor-level to gate-level, then RTL, and finally using High Level Synthesis (HLS). Another emerging software trend is the use of generative AI to make coding RTL more automated. There’s a new EDA company called Rise Design Automation that enables design and verification beyond RTL, so I attended their recent webinar to learn more about what they have to offer.

Ellie Burns started out with an overview of the semiconductor market and how trends like AI/ML, 5G, IoT, and hardware accelerators are driving the landscape. RTL design techniques and IP reuse have done OK, but there is insatiable demand for new and larger designs amidst a general engineering shortage.

What Rise offers is a new, three-pronged approach to meet the market challenges:

Their generative AI is an assistant that automatically creates SystemVerilog, C++ or SystemC code based on your prompts. An agent also runs high-level synthesis and verification, saving time and effort. There’s even an agent for iterative performance tuning and optimization tasks.

This high-level code is synthesized into RTL, and using high-level verification can be up to 1,000X faster than RTL verification. This flow enables debug, analysis, and exploration more quickly and thoroughly than an RTL approach. System architects get early, executable models to explore architectures. RTL designers can re-use system models and reach PPA optimization faster. Verification engineers start verifying much earlier and benefit from auto-generated adaptors/transactors. Even the software engineers use the virtual platform for early access to accurate hardware behavior, and their model is always in sync with the hardware.

Mike Fingeroff, Chief of HLS at Rise, was up next, showing how the high-level agents work with a human in the loop, using existing pre-trained LLMs plus a specialized knowledge base. The pre-trained LLMs eliminate the need for any sensitive RTL training data. HLS converts SystemVerilog, C++ or SystemC into synthesizable RTL, inserts pipeline registers to meet timing, adds dataflow control, enables exploration, and even infers memories.

Their HLS synthesis creates RTL, using constraints for power, performance, and area while optimizing with a technology-aware library. Here's the architecture that Rise has:

Alan Klinck, Co-founder of Rise DA, talked about agent-based generative AI for hardware design in three parts:

  • Rise Advisor – prompts used for tasks and questions, expert assistance, accelerate code development
  • Rise HLS Agent
  • Rise Optimization Agent

For an example case he showed us hls4ml, a Python package for machine learning inference in FPGAs.

A live prompt typed was, “How do I make my HLS design synthesizable?”

The Rise system is modular, so you can even use your own language models. Their knowledge base plus language models reduce hallucinations, improving the quality of results. The language models can run on-premises or in the cloud, your choice, and there is no training on your tool usage. A consumer GPU like the NVIDIA 4090 is sufficiently powerful to run their tools. You go from HLS to RTL typically in seconds to minutes, so it's very fast.

For the live demo they used the Visual Studio Code tool on a loosely timed design with some predefined prompts, and as they asked questions and made prompts we saw newly generated code, ready for HLS. Trade-offs between multiple implementations were quickly generated for comparison to find the optimal architecture.

Summary

I was impressed with an EDA vendor running their tool flow live during a webinar, instead of pre-canned screenshots. If you’ve considered improving your familiar RTL-based design flow with something newer and more powerful, then it’s likely time to look at what Rise DA has to offer. The engineering team has decades of experience in EDA and HLS, plus they’ve added generative AI to the flow that now benefits both design and verification engineers.

View the replay of their webinar online.

Related Blogs


Keysom and Chipflow discuss the Future of RISC-V in Automotive: Progress, Challenges, and What’s Next

by Admin on 04-10-2025 at 6:00 am


by Tomi Rantakari, CEO of ChipFlow, and Luca Testa, COO of Keysom

The automotive industry is undergoing a major transformation, driven by electrification, the rise of new market players, and the rapid adoption of emerging technologies such as AI. Among the most significant advancements is the growing adoption of RISC-V, an open-standard instruction set architecture (ISA) that is reshaping how automotive chips are designed and deployed. Companies like Keysom (keysom.io) and Chipflow (chipflow.io) are playing a crucial role in this shift, helping to navigate the opportunities and challenges of RISC-V in the automotive sector.

Progress in RISC-V Adoption

RISC-V offers a compelling alternative to proprietary architectures, providing flexibility, cost efficiency, and a growing ecosystem of development tools. In automotive applications, its modularity allows manufacturers to tailor processors for specific workloads, optimizing performance and power efficiency. With the increasing complexity of software-defined vehicles and real-time processing requirements, RISC-V’s open ecosystem fosters innovation by enabling chip customization at all levels.

Chipflow and Keysom have been actively contributing to this progress, focusing on design automation, tooling, and reference architectures and IPs that accelerate the adoption of RISC-V-based solutions in automotive systems. Both companies help reduce barriers to entry, ensuring a smoother transition for automotive players looking to integrate RISC-V into their next-generation platforms.

Challenges in Automotive Integration

Despite its advantages, RISC-V faces several challenges in automotive applications. The industry's strict safety and reliability requirements necessitate extensive validation and compliance with standards such as ISO 26262.

Additionally, transitioning from well-established proprietary architecture requires significant investment in software development, ecosystem support, and integration with existing automotive workflows.

Keysom and Chipflow recognize these challenges and are working to address them through industry collaborations, improved IP customization, toolchains, and enhanced verification methodologies. Developing a robust software ecosystem, and functional safety-certified toolchains, remains a top priority to ensure that RISC-V can meet the demands of mission-critical automotive applications.

The Role of the European Chips Act

One of the key drivers of RISC-V adoption in Europe is the European Chips Act. This initiative aims to enhance Europe’s semiconductor capabilities, ensuring supply chain resilience and fostering innovation in critical technologies, including open-source processor architectures like RISC-V.

Under the “Chips for Europe Initiative,” significant investments are being made to support research, pilot production, and the development of next-generation semiconductor technologies. This initiative aligns with the vision of Keysom and Chipflow, both of which are committed to advancing open hardware solutions that provide greater flexibility and sovereignty in automotive chip design. By leveraging the funding and infrastructure provided by the Chips Act, both companies are strengthening their efforts to build scalable, high-performance RISC-V solutions tailored for automotive applications.

What’s Next for RISC-V in Automotive?

The future of RISC-V in the automotive industry looks promising, with increasing investments, growing industry adoption, and expanding ecosystem support. As the industry moves toward software-defined vehicles, the demand for flexible, customizable hardware will continue to rise.

Keysom and Chipflow remain at the forefront of this evolution, driving advancements in RISC-V-based automotive processors. By focusing on design automation, open-source collaboration, and compliance with industry standards, both companies are helping pave the way for a more open and innovative automotive semiconductor landscape.

As RISC-V matures and gains traction in safety-critical applications, collaboration and ecosystem growth will be essential. The path toward a more open, customizable, and efficient automotive hardware future is becoming increasingly clear. The next few years will be crucial in shaping the role of RISC-V in vehicles, and Keysom and Chipflow are committed to contributing to this transformation.

Also Read:

CEO Interview with Cyril Sagonero of Keysom


Podcast EP282: An Overview of Andes Focus on RISC-V and the Upcoming RISC-V CON

by Daniel Nenni on 04-09-2025 at 6:00 am

Dan is joined by Marc Evans, director of business development and technology at Andes. Marc has over twenty years of experience in the use of CPU, DSP, and Specialized IP in SoCs from his prior positions at Lattice Semiconductor, Ceva, and Tensilica. During his early career, Marc was a processor architect, making significant contributions to multiple microprocessors and systems at Tensilica, HP, Rambus, Rise Technology, and Amdahl.

Dan discusses the significant focus of Andes on commercialization of RISC-V with Marc. He also discusses the upcoming Andes RISC-V CON. Marc explains that Andes does this conference at a time that doesn’t conflict with key events done by RISC-V International. The idea is to provide a broad coverage of the RISC-V movement throughout the year. Marc provides an overview of the event, which includes engineering-level presentations along with partner contributions, strategic presentations and a new Developer’s Track, which provides a deep dive into key areas. There are also fireside chats, high-profile talks from industry luminaries and the opportunity to network with conference attendees.

The event will be held at the DoubleTree Hotel in San Jose on April 29, 2025. You can learn more about the conference and register to attend for free here. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview with Pierre Laboisse of Aledia

by Daniel Nenni on 04-08-2025 at 10:00 am


With over 25 years of international experience in the high-tech sector, Pierre Laboisse now leads Aledia with strategic expertise. Before Aledia, he made significant contributions at Infineon, NXP, and ams OSRAM. Having served on the boards of KeyLemon and 7 Sensing Software, he demonstrates solid expertise in corporate strategy and execution. Pierre is renowned for his results-driven leadership and commitment to innovation in the microLED sector.

Pierre is leading Aledia with a vision for scaling its innovative technology to transform the global display market. Pierre brings extensive leadership experience in high-tech industries and a proven track record of driving business growth.

Tell us about your company:

Aledia is a French deep-tech company specializing in advanced display solutions. Originating from CEA's research labs in 2011, it focuses on developing unique microLED technology for next-generation displays, including applications in consumer products, automotive, and augmented reality. Aledia's proprietary technology uses a new class of 3D nanowire LEDs grown on silicon, offering significant advantages in brightness, energy efficiency, and cost compared to traditional 2D microLED technologies.

What problems are you solving?

AR and consumer products simultaneously require displays that are exceptionally bright, energy-efficient, compact, and cost-effective. No technology in mass production today can meet all these criteria at once. 3D microLED is the only viable solution to meet the requirements of this future generation of displays.

For AR in particular, tech giants are accelerating efforts in microLED technology for smart glasses, aiming for commercial launches by 2027, but hardware challenges like power consumption, bulkiness, and manufacturing costs still hinder mass adoption. After 12 years of R&D, nearly 300 patents, and $600 million in investment, Aledia has overcome the toughest hardware challenges, paving the way for the most immersive, AI-powered AR vision experiences ever conceived.

We address these needs through a proprietary process that grows GaN nanowires on standard silicon wafers, enabling 3D microLEDs with high performance and potential for lower-cost production. This combination of performance and scalability makes it more efficient and cost-effective for accelerating next-generation displays into everyday devices.

What application areas are your strongest?

We’re particularly focused on augmented reality, where top-tier display performance in a very small, power-efficient footprint is a must. Our nanowire-based microLEDs are designed to deliver high brightness and efficiency, even in challenging lighting conditions, while fitting into compact form factors.

Additional applications include consumer products (smartwatches and smartphones), automotive (dashboards and head-up displays), and TVs. In fact, by having the world's smallest, most efficient LED on an 8-inch silicon wafer, we have a technology that can lower microLED production costs. This is a very important issue for these applications, where price competition with OLED and LCD technologies is very strong.

What keeps your customers up at night?

To achieve mass adoption of augmented reality and other truly immersive experiences, our customers need displays that can handle bright outdoor settings, operate efficiently on limited power, and fit into lightweight, user-friendly designs. They want solutions that move beyond conceptual demonstrations and deliver meaningful, everyday utility. We’re honing our microLED approach so that when these products hit the market, they deliver the kind of seamless, real-world experience users genuinely value.

What does the competitive landscape look like and how do you differentiate?

There are currently players in 2D microLED technology, but these solutions are still very expensive and not suitable for mass production. Aledia's advantage lies in its over-$200-million in-house pilot production line at the center of Europe's "Display Valley," enabling faster iteration without initial volume constraints. By utilizing semiconductor-grade silicon in 8-inch and 12-inch formats, Aledia lowers the cost of large-scale microLED production, accelerating widespread adoption across a wide range of displays.

What new features/technology are you working on?

After 12 years of relentless R&D, a portfolio of nearly 300 patents and $600 million in investment, we’re now focused on bringing our microLEDs to market to serve customers at the cutting edge of innovation in AR, automotive, wearables and electronics. Aledia is ready and able to support customer demand ramp up to nearly 5,000 wafer starts per week.

How do customers normally engage with your company?

Customers typically collaborate with us early in the design process to integrate our microLEDs into their next-generation devices. Our in-house pilot production line allows us to accelerate development and adapt to their needs quickly. This hands-on approach ensures we’re solving the right problems and delivering impactful solutions.

Contact Aledia

Also Read:

CEO Interview with Cyril Sagonero of Keysom

CEO Interview with Matthew Stephens of Impact Nano

CEO Interview with Dr Greg Law of Undo


Stitched Multi-Patterning for Minimum Pitch Metal in DRAM Periphery

by Fred Chen on 04-08-2025 at 6:00 am

In a DRAM chip, the memory array contains the most densely packed features, but at least they are regularly arranged. Outside the array that regularity is lost, yet in the most difficult cases the pitches can still be comparable to those within the array, though generally larger. Such features include the lowest metal lines in the periphery for the sense amplifier (SA) and sub-wordline driver (SWD) circuits.

A key challenge is that these lines meander, with the pitch varying over a range; the local max/min pitch ratio can run from ~1.4 to 2. These pitches have different focus windows [1], and in EUV lithography these windows may be separated by more than the resist thickness [2].

Pitch uniformity within a single exposure can be attained if the layout is split accordingly for double patterning with stitching [3,4] (Figure 1). The layout is dissected into stripes of alternating color, each color assigned to one of two exposures. Features may cross stripe boundaries; in that case, the two exposures need to stitch correctly at the boundaries.
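The dissection described above can be sketched as a small program. This is an illustrative sketch only, not a production coloring/OPC flow: it works on a 1-D slice of the layout (features as x-intervals along one track), and the stripe width and stitch overlap values are hypothetical, chosen just to show the mechanics of alternating exposure assignment and boundary stitching.

```python
# Sketch: dissect a 1-D slice of a metal layout into stripes of alternating
# "color" (exposure 1 or 2), splitting any feature that crosses a stripe
# boundary into stitched pieces that overlap slightly so the two exposures
# join correctly. Stripe width and overlap are hypothetical values.

STRIPE_W = 200       # hypothetical stripe width, nm
STITCH_OVERLAP = 10  # hypothetical total stitch overlap at a boundary, nm

def split_for_stitched_dp(features, stripe_w=STRIPE_W, overlap=STITCH_OVERLAP):
    """features: list of (x_start, x_end) segments along one track, in nm.
    Returns {1: [...], 2: [...]} mapping exposure number to its segments."""
    exposures = {1: [], 2: []}
    for x0, x1 in features:
        s = int(x0 // stripe_w)          # index of the stripe containing x0
        while s * stripe_w < x1:         # walk every stripe the feature touches
            color = 1 if s % 2 == 0 else 2
            # extend each piece slightly past the stripe boundary for stitching
            seg_lo = max(x0, s * stripe_w - overlap / 2)
            seg_hi = min(x1, (s + 1) * stripe_w + overlap / 2)
            exposures[color].append((seg_lo, seg_hi))
            s += 1
    return exposures
```

For example, a 300 nm segment spanning three 200 nm stripes would be split into three pieces on alternating exposures, with adjacent pieces overlapping by the stitch margin at each boundary.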

Figure 1. Splitting a metal layout for stitched double patterning for better pitch uniformity.

Alternatively, some features like diagonals may be forbidden to be stitched, resulting in a different layout split (Figure 2).

Figure 2. Alternatively splitting a metal layout for stitched double patterning for better pitch uniformity, avoiding stitching of diagonal features.

For minimum pitches above 40 nm, we expect double patterning to be sufficient with ArF immersion lithography. If the minimum pitch is below this, triple patterning with ArF immersion lithography may be used (Figure 3) as an alternative to EUV double patterning.
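The selection rule above can be written as a one-line helper. The ~40 nm threshold is the article's working assumption, not a foundry design rule, so treat this purely as a summary of the reasoning:

```python
# Sketch of the pitch-based scheme selection described above.
# The 40 nm crossover is the article's assumption, not a design rule.
def patterning_scheme(min_pitch_nm: float) -> str:
    if min_pitch_nm > 40:
        return "ArF immersion double patterning"
    # below ~40 nm, ArF triple patterning is an alternative to EUV double patterning
    return "ArF immersion triple patterning (or EUV double patterning)"
```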

Figure 3. Parsing the layout of Figure 1 for triple patterning including stitching.

Previously, quadruple patterning was suggested where the minimum line pitch is less than 40 nm [1], but it turns out triple patterning may suffice (Figure 4).

Figure 4. A quadruple patterning arrangement [1] (left) can be rearranged to support triple patterning with stitching (center) or possibly even avoid stitching (right).

In some special cases, a multiple-spacer approach may be able to produce the periphery metal pattern, with islands and bends, in only one mask exposure [5]. However, stitched double patterning has been the default choice for a very long time [3,4]; it should be expected to stay that way for as long as possible, even through the mid-teen nm DRAM nodes [6].

References

[1] F. Chen, Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM.

[2] A. Erdmann et al., J. Micro/Nanolith. MEMS MOEMS 15, 021205 (2016); E. van Setten et al., 2012 International Symposium on EUV Lithography.

[3] Y. Kohira et al., Proc. SPIE 9053, 90530T (2014).

[4] S-Min Kim et al., Proc. SPIE 6520, 65200H (2007).

[5] F. Chen, Triple Spacer Patterning for DRAM Periphery Metal.

[6] C-M. Lim, Proc. SPIE 11854, 118540W (2021).


Also Read:

A Perfect Storm for EUV Lithography

Variable Cell Height Track Pitch Scaling Beyond Lithography

A Realistic Electron Blur Function Shape for EUV Resist Modeling

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution


Alphawave Semi is in Play!

by Daniel Nenni on 04-07-2025 at 10:00 am

We started working with Alphawave at the end of 2020 with a CEO Interview. I had met Tony Pialis before and found him to be a brilliant and charismatic leader, so I knew it would be a great collaboration. Tony was already an IP legend after his previous company was acquired by Intel. After more than four years at Intel, Tony co-founded Alphawave in 2017. Today, Alphawave Semi is a global leader in high-speed connectivity and compute silicon for the world’s technology infrastructure. They are publicly traded on the London Stock Exchange (LSE), with a recent stock price spike for obvious reasons. I have nothing but great things to say about Alphawave, absolutely.

When I first read about the exclusive OEM deal between Siemens and Alphawave I immediately thought: Why was this not an acquisition?

Siemens to accelerate customer time to market with advanced silicon IP through new Alphawave Semi partnership

“Siemens Digital Industries Software is a key and trusted partner for AI and hyperscaler developers, and our agreement simplifies and speeds the process of developing SoCs for these, and other leading-edge technologies, to incorporate Alphawave Semi’s IP,” said Tony Pialis, president and CEO, Alphawave Semi. “Our technologies play a critical role in reducing interconnect bottlenecks and this collaboration greatly expands our customer reach, allowing more companies to deliver next-level data processing.”

Ever since Siemens acquired Mentor Graphics in 2017 for $4.5B, they have pursued an aggressive acquisition strategy, acquiring dozens of companies. We track them on the EDA Merger and Acquisition Wiki. During the day I help emerging companies with exits, so I have M&A experience with the big EDA companies, including Siemens. They all do it differently, but I can tell you the Siemens M&A team is VERY good and they do not take being outbid lightly. I was with Solido Design and Fractal when they were acquired by Siemens, and I was with Tanner EDA and Berkeley Design when they were acquired by Mentor Graphics. It was a night-and-day difference between the M&A processes.

The only answer I came up with as to why it was an OEM agreement versus an outright acquisition was price: Tony wanted more money. That was purely speculation on my part, but now that I have read that both Qualcomm and Arm might be interested in acquiring Alphawave, it is more than speculation.

Qualcomm considers buying UK semiconductor firm Alphawave

Exclusive-Arm recently sought to acquire Alphawave for AI chip tech, sources say

Given that Alphawave has a lot of Intel experience inside and Lip-Bu Tan knows the importance of IP and ASIC design, maybe Intel will make an offer as well?

To be clear, Alphawave is not just an IP licensing company; they also do chiplets and custom ASICs. In 2022 they acquired SiFive’s ASIC business for $210M. This was the former Open-Silicon, which SiFive acquired in 2018 for an undisclosed amount. I worked with both Open-Silicon and eSilicon at the time. The ASIC business is a whole lot easier if you have the best IP money can buy, and Alphawave does.

I remember trying to talk Mentor into entering the IP business 15 years ago but they would not budge, which was clearly a mistake. Synopsys and Cadence leverage their IP for EDA business and that puts Siemens EDA at a distinct disadvantage, thus the OEM agreement with Alphawave. IP is a big driver in the semiconductor ecosystem, just ask TSMC.

Also, Broadcom, Marvell, and MediaTek each have an ASIC business; Qualcomm and Arm do not. There is more than IP in play here.

I would bet that Siemens has some kind of Right of First Refusal tied to the OEM agreement, and Tony getting offers from Qualcomm and Arm would pull that trigger. I really do not see Siemens having much of a choice but to pay a premium for Alphawave.

Exciting times in the semiconductor industry!

Also Read:

Podcast EP276: How Alphawave Semi is Fueling the Next Generation of AI Systems with Letizia Giuliano

Scaling AI Data Centers: The Role of Chiplets and Connectivity

How AI is Redefining Data Center Infrastructure: Key Innovations for the Future