
CEO Interview with Ronald Glibbery of Peraso

by Daniel Nenni on 04-15-2025 at 10:00 am


Mr. Glibbery leads all functional areas of Peraso Inc. and has served as chief executive officer since December 2021. He co-founded Peraso Technologies Inc. in 2009 and previously served as its chief executive officer. Prior to co-founding Peraso Technologies, Mr. Glibbery was President of Intellon, a pioneer and leader in the development of semiconductor devices used for powerline communications. Previously, Mr. Glibbery was a member of the management team of LSI Logic, Canada.

Tell us about your company?

Peraso Inc. (“Peraso”) is a global leader in the development and high-volume deployment of semiconductor solutions for the unlicensed 60 GHz (mmWave) spectrum. With a focus on high-performance, scalable wireless technologies, Peraso serves a diverse range of markets, including fixed wireless access (FWA), aerospace and defense, transportation communications, and professional video delivery.

What problems are you solving?
    • Providing affordable, reliable connectivity in challenging urban environments
      • A major benefit of mmWave technology is beamforming, the ability to focus radio energy into a narrow beam. Many mmWave networks can therefore coexist in a dense user environment, because adjacent beams do not interfere with each other. This is in stark contrast to traditional wireless technology, where substantial overlap between adjacent networks renders it unsuited for dense deployments.
    • Matching the performance of wired/fiber infrastructure with better physical security and lower cost
      • Peraso’s 60 GHz technology operates at data rates of 3 Gbps, competing favorably with the premium 1 Gbps data rate offered by fiber operators. Further, due to the use of beamforming, third-party snooping is very difficult, providing carriers with a fundamental level of security at the physical layer.
    • Open to any operator without spectrum acquisition or costly 4G/5G equipment
      • The 60 GHz spectrum does not require a license to operate, so operators avoid the significant capital required for licensed bands. This is a viable deployment model because beamforming enables simultaneous transmissions in a common environment.
    • Overcoming congestion in sub-6 GHz Wi-Fi networks
      • As in item 1, beamforming lets many 60 GHz links coexist in dense environments, avoiding the congestion of overlapping sub-6 GHz networks.
    • Maintaining service during frequent power outages
      • A major advantage Peraso provides is the ability to operate with relatively modest power consumption. In many of our jurisdictions, the electrical power grid is unreliable, so many of our customers use batteries in conjunction with solar cells without relying on the electrical power grid.
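The density and interference arguments above come down to link-budget arithmetic. As a rough illustration (this is the standard free-space path-loss formula, generic physics rather than Peraso data), a 60 GHz link incurs about 21.6 dB more free-space loss than a 5 GHz link of the same length, a deficit that the narrow, high-gain beam of a phased array recovers:

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB, with distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# 60 GHz pays roughly 20*log10(60/5) ~ 21.6 dB more path loss than 5 GHz
# over the same link, which is why high-gain beamforming is essential at mmWave.
extra_loss = fspl_db(1.0, 60.0) - fspl_db(1.0, 5.0)
print(f"Extra path loss at 60 GHz vs 5 GHz: {extra_loss:.1f} dB")
```

The same directivity that closes the link budget is what keeps adjacent beams from interfering with each other.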
What application areas are your strongest?

mmWave:

Peraso Inc. is a global leader in the development and high-volume deployment of semiconductor solutions for the unlicensed 60 GHz (mmWave) spectrum.

Tactical Communications:

We have also secured significant interest and traction in the defense/tactical communications space as a result of our technology’s inherently stealthy protocol, low probability of interception (LPI), ease of deployment, and ability to deliver gigabit speeds to all platforms.

Transportation:

Peraso has demonstrated the benefits of mmWave radio systems across several transportation platforms. Most recently, Peraso has developed a High Velocity Roaming (HVR) operating mode for its 60 GHz modules. HVR ensures that data terminals located along the rail side are able to track the fast-moving train and that the train’s terminals are able to seamlessly switch between connection points in a make-before-break sequence. HVR also provides system scalability: each channel provides up to 3 Gbps of traffic bandwidth, and multiple channels can be aggregated. Peraso’s HVR can provide riders with faster, more reliable data that will meet foreseeable future demands.
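A make-before-break sequence simply means a terminal joins the next trackside radio before dropping the current one, so connectivity is never interrupted. A toy sketch of the idea (class and names hypothetical, not Peraso’s actual HVR implementation):

```python
# Illustrative make-before-break roaming model (names are hypothetical,
# not taken from the Peraso HVR protocol).
class Terminal:
    def __init__(self):
        self.links = []  # currently associated trackside connection points

    def roam(self, next_ap: str):
        self.links.append(next_ap)   # "make": establish the new link first
        if len(self.links) > 1:
            self.links.pop(0)        # "break": only then drop the old link

t = Terminal()
t.roam("trackside-1")
t.roam("trackside-2")
print(t.links)  # at no point during roaming was the links list empty
```

The key property is the ordering: the old link is torn down only after the new one is up, so a fast-moving train never loses its data path mid-handover.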

What keeps your customers up at night?

Ultimately our customers are the service providers who deliver broadband internet services. Their number one concern is making sure their subscribers have reliable internet service. Peraso provides critical technology to address these concerns, with features such as high reliability in rain or shine, low-power operation, resistance to interference, and strong security.

What does the competitive landscape look like and how do you differentiate?

Peraso is the only semiconductor supplier that provides OEMs with a complete 60 GHz system solution. This includes RF, signal processing, comprehensive software support, and phased-array antenna technology.

What new features/technology are you working on?
    • Reduced power consumption
    • AP-AP roaming
    • Increased user support
    • Broader antenna technology
How do customers normally engage with your company?

New partners and existing customers wishing to engage with us are encouraged to visit our website at https://perasoinc.com/contact-us/.

Also Read:

CEO Interview with Pierre Laboisse of Aledia

CEO Interview with Cyril Sagonero of Keysom

CEO Interview with Matthew Stephens of Impact Nano


SNUG 2025: A Watershed Moment for EDA – Part 1

by Lauro Rizzatti on 04-15-2025 at 6:00 am


Hot on the heels of DVConUS 2025, the 35th annual Synopsys User Group (SNUG) Conference made its mark as a defining moment in the evolution of Synopsys—and the broader electronic design automation (EDA) industry. This year’s milestone event not only underscored Synopsys’ continued innovation but also affirmed the vision and direction of its new leadership.

The conference opened with a keynote address from Synopsys President and CEO Sassine Ghazi, setting the tone for two packed days of technical exploration. Attendees had access to 10 parallel tracks covering everything from analog and digital design to system-level innovations, with real-world case studies across IP, SoCs, and emerging system-of-systems built on cutting-edge multi-die architectures.

Highlights included a Fireside Chat, a thought-provoking panel discussion, and a second keynote, delivered by Richard Ho, Head of Hardware at OpenAI, titled “Scaling Compute for the Age of Intelligence,” each drawing a full house of users, Synopsys staff, media, and industry influencers.

Figure 1: Richard Ho, Head of Hardware, OpenAI

The exhibit hall buzzed with energy as users connected directly with Synopsys engineers and executives, exchanging insights and exploring the latest tools and solutions.

Keynote by Sassine Ghazi, Synopsys’ President and CEO: Re-engineering Engineering

Sassine set the stage by highlighting the arrival of a new era—one defined by pervasive intelligence. This shift promises to deliver unprecedented innovation and disruption, fueled by a surge of AI-powered products built on advanced AI silicon. To realize this future, there is a need for highly efficient silicon, which in turn demands a rethinking of how to engineer systems. In Sassine’s words, we are entering a time when we must “re-engineer engineering itself.”

Designing AI-centric silicon turns the traditional design process on its head. Instead of developing computing hardware in isolation and testing it against software workloads only during the validation stage, the new approach puts the software workload in the driver’s seat, using it from the outset to shape the processing hardware architecture.

This new paradigm requires deep collaboration across the ecosystem, including partnerships with established semiconductor leaders and cutting-edge startups alike. Underscoring the importance of such collaboration, Sassine welcomed Microsoft CEO Satya Nadella to join him remotely on stage to share his vision for the road ahead. Nadella described the moment as “Moore’s Law on hyperdrive,” with scaling laws accelerating across multiple S-curves. Hardware and software complexities are now growing in tandem, aiming to deliver super-fast performance while consuming less energy at reduced unit costs.

He also outlined the critical and evolving role of AI in the design process, describing it as a journey through three distinct phases. In the beginning, engineers asked questions and executed tasks manually. Today, we’re in the second phase where engineers issue instructions and AI handles the execution, though human oversight remains crucial. In the next phase, AI will take on a more autonomous role, making design decisions to generate high-quality, well-optimized products. But even as abstraction levels rise, the role of the engineer will not disappear. A deep understanding of systems will remain essential to guiding and validating AI-driven products.

One of the most significant developments over the past year, Nadella noted, has been the growing need for AI models to reason—not just execute. While many pre-trained models are quite capable, true progress lies in teaching these models to reason effectively for specific tasks. In silicon design, that means enabling AI to make smart trade-offs to optimize power, performance, and area. It’s not just about what the model knows—it’s about how it reasons through complexity to produce better engineering solutions.

Sassine went on to address the escalating complexity of chip design, emphasizing the challenge of building systems with hundreds of billions—or even trillions—of transistors using angstrom-scale process technologies. These designs are increasingly implemented across multiple dies integrated into a single package, while compressing development timelines—from the traditional 18-month tapeout cycle down to 16, 12 months, or even less—to deliver highly customized silicon for next-generation intelligent systems.

Sassine explained that tackling this complexity demands technological evolution across six key dimensions of the design process:

  1. 3D IC Packaging – Leveraging multi-die systems built on different process technologies and sourced from multiple foundries is essential for efficiently mapping trillions of transistors.
  2. Innovative IP Interfaces – High-performance, power-optimized communication between chiplets in multi-die assemblies is critical to meeting system-level targets.
  3. Advanced Process Nodes – Progressing into the angstrom era requires entirely new approaches to scaling and integration.
  4. Next-Generation Verification and Validation – Cutting-edge techniques are needed to enable effective hardware/software co-design and support shift-left methodologies.
  5. Silicon Lifecycle Management (SLM) – With schedules tightening, verification must extend from pre-silicon to post-silicon and continue into in-field testing to ensure ongoing quality and performance.
  6. Holistic EDA Methodologies – Tools must now encompass the full stack—from front-end to back-end, assembly to packaging—bridging both the abstract and physical domains, and extending beyond electronics to account for thermal, mechanical, structural, and fluidic challenges.

Sassine then turned to how Synopsys has been embedding AI into its tool suite to address these growing demands and transform the design process.

The journey began in 2017 with Synopsys.ai through the pioneering use of reinforcement learning in physical implementation, introducing DSO.ai to work collaboratively with Fusion Compiler. The goal was to optimize an enormous design space with countless inputs to deliver the best possible PPA—power, performance, and area—in the shortest time.

That was followed by the data continuum—a framework that connects insights across the design and manufacturing lifecycle. Design.da, Fab.da, and Silicon.da used analytics to inform what happens at each successive stage of the flow. In 2018, a working prototype delivered fantastic results on real customer designs, but it was also met with a mix of skepticism and confusion. Customers weren’t sure how to integrate the technology into workflows that had been optimized over decades, and engineers, often skeptical of new methods, do not embrace change easily. But today, Synopsys.ai has become essential for achieving the level of productivity and quality needed to keep pace with growing complexity.

Generative AI opened up new opportunities for innovation at Synopsys. Generative AI includes two parts: assistive and creative. The assistive side encompasses a co-development with Microsoft to introduce Copilot-style capabilities—workflow assistants, knowledge assistants, debug assistants. These tools help both junior and senior engineers ramp up faster and interact with Synopsys software in a more modern, efficient way. The creative side supports tasks like RTL generation, testbench creation, documentation, and test assertions. Here is where Copilot not only assists but also creates. The productivity gains are game-changing, compressing tasks that once took days into just minutes.

As AI continues to evolve, so too does the design workflow. The rise of agentic AI has sparked the vision of agent engineers—AI collaborators that will work alongside human engineers to manage complexity and reshape the design flow itself. This is where Synopsys is investing in partnerships with leaders like Microsoft, NVIDIA, and others to develop domain-specific agents tailored for the semiconductor industry.

At this stage, Sassine summarized the adoption of AI from the beginning into the future via a roadmap charting on the X-axis the evolution from copilot to autopilot and on the Y-axis the cumulative capabilities—from generative AI to agentic AI—layered step by step. See figure 2.

Figure 2: The path of AgentEngineer (Source: Synopsys)

The initial focus was on assistance. Over the past few years, copilot-style capabilities were embedded in each of Synopsys’ tools, with copilots powered by domain-specialized LLMs trained specifically for their respective tasks—whether synthesis, simulation, or verification. This foundational step is to be followed by the adoption of action, introduced by agents purpose-built for specific tasks: for example, an agent for RTL generation, another for testbench creation, or one focused on generating test assertions. These task-specific agents will continually improve as they learn from real-world designs and the unique environments of each customer.

As the technology matures, we move into the orchestration phase: coordinating multiple agents to work together seamlessly. From there, we progress to dynamic, adaptive learning—where agents begin optimizing themselves based on unique workflow and design context.

Initially, agents will operate within existing workflows. But as orchestration and planning capabilities advance, the workflows themselves will begin to evolve. The ultimate goal is to build a framework where agentic systems can autonomously act and make decisions—on a block, a subsystem, or even an entire chip—driving toward a future of intelligent, self-directed design.

To conclude, Sassine drew a parallel to the levels of autonomous driving—from L1 to L5—where the progression goes from human-monitored systems to fully autonomous vehicles. In the early stages (L1-L2), the driver remains in control, while higher levels shift responsibility to the system itself. The adoption of AI agents in engineering is going to follow a similar framework—a path from today’s assistive copilots to future autonomous, multi-agent design systems. See figure 3.

Figure 3: adoption of AI agents

At Level 1, today’s copilots are AI assistants embedded within Synopsys tools that support engineers with tasks like file creation or code generation using large language models. They help, but they don’t act on their own.

Level 2 introduces task-specific action. Here, agents can begin executing defined tasks within a controlled scope. For instance, an agent could fix a lint error, resolve a DRC violation, or make other focused adjustments—with the human engineer still actively involved in oversight and decision-making.

At Level 3, the realm of multi-agent orchestration begins. This is where agents collaborate across domains to solve more complex challenges. As examples, signal integrity or timing closure issues that span over multiple parts of the flow require coordination among several specialized agents to achieve resolution.

Level 4 adds planning and adaptive learning. At this stage, agent systems begin to assess the quality of their own outputs, refine the flow, and adapt over time. The workflow itself begins to evolve—moving beyond the static, predefined flows of earlier levels.

Finally, Level 5 is what we consider true autopilot. Here, a fully autonomous multi-agent system can reason, plan, and act independently across the entire design process. It has the intelligence and decision-making capability to achieve high-level goals with minimal human input.

Today, Synopsys is actively operating at Levels 1 and 2, with a growing number of real-world engagements across customers. These systems are assisting and acting in limited scopes, and are continuously enhanced. Just like in autonomous driving, reaching L3 or L4 doesn’t mean abandonment of L1 or L2. Each level builds upon the last—constantly evolving and coexisting as the technology matures.

In wrapping up, Sassine returned to two key ideas: the need to re-engineer engineering, and the rise of agent engineers working in tandem with human engineers. Together, they will drive the workflow transformation required to meet the scale, complexity, and speed of what lies ahead.

Also Read:

Synopsys Webinar: The Importance of Security in Multi-Die Designs – Navigating the Complex Landscape

Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation

Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures


Design IP Market Increased by All-time-high: 20% in 2024!

by Eric Esteve on 04-14-2025 at 10:00 am


Design IP revenues reached $8.5B in 2024, an all-time-high growth of 20%. Wired Interface is still driving Design IP growth at 23.5%, but the Processor category also grew by 22.4% in 2024. This is consistent with the makeup of the Top 4 IP companies: ARM (mostly focused on processors) and the trio leading the wired interface category, Synopsys, Cadence, and Alphawave. The Top 4 vendors are growing even faster than the market (more in the 25% range) and represent a combined 75% share in 2024, up from 72% in 2023.

Their preferred target is mobile computing for ARM and High Performance Computing (HPC) applications for the #2, #3 and #4 IP companies. The preferred IP for the HPC segment is based on interconnect protocols like PCIe and CXL, Ethernet and SerDes, chip-to-chip (UCIe), and DDR memory controllers including HBM. These vendors position advanced-node solutions to capture the needs of AI hyperscaler developers, even if Synopsys also targets the mainstream market and de facto enjoys larger revenues.

IPnest has released the “Design IP Report” in April 2025, ranking IP vendors by category and by nature, license and royalty.

How can the Design IP market in 2024 be consistent with semiconductor market behavior? Looking at TSMC revenues by platform in Q4 2024: HPC stood at 53%, smartphone at 35%, IoT at 5%, automotive at 4%, and others at 3%. For the full year, revenue from HPC, smartphone, IoT, automotive, and DCE increased 58%, 23%, 2%, 4%, and 2% respectively from 2023, while others decreased.

In 2024, the IP market was strongly driven by vendors supporting HPC applications selling wired interface IP (Synopsys, Cadence, Alphawave, and Rambus), but also by vendors selling CPUs and GPUs for smartphones (ARM and Imagination Technologies). The IP market perfectly mimics the semiconductor market: most of the year-over-year growth is coming from a single segment, HPC (even if ARM’s performance is notable, with 26% YoY growth).

Looking at the 2016-2024 IP market evolution brings interesting information about the main trends. The global IP market has grown by 145%, while the Top 3 vendors have seen unequal growth: #1 ARM grew by 124%, while #2 Synopsys grew by 326% and #3 Cadence by 321%.

Market share information is even more significant. ARM moved from 48.1% in 2016 to 44% in 2024, while Synopsys grew from 13.1% to 23%.

This can be synthesized with a comparison of 2016-2024 CAGR:

      • Synopsys CAGR: 19%
      • Cadence CAGR: 19%
      • ARM CAGR: 9%
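The CAGR figures above follow the standard compounding formula. A minimal sketch (the function name is mine; exact rounding depends on the underlying revenue bases IPnest used):

```python
def cagr(total_growth_pct: float, years: int) -> float:
    """Compound annual growth rate implied by cumulative growth over a period."""
    return ((1 + total_growth_pct / 100) ** (1 / years) - 1) * 100

# Synopsys grew 326% over the eight years from 2016 to 2024,
# which compounds to roughly 19-20% per year.
print(f"{cagr(326, 8):.1f}% per year")
```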

IPnest has also calculated IP vendors ranking by License and royalty IP revenues:

Synopsys is the clear #1 by IP license revenues with 32% market share in 2024, when ARM is #2 with 30%.

Alphawave, created in 2017, is now ranked #4, just behind Cadence, growing from zero to over $270 million in seven years. This shows how essential high-performance SerDes IP is for modern data-centric applications and for building a performant interconnect IP portfolio. Reminder: “Don’t mess with SerDes!”

Eric Esteve from IPnest

To buy this report, or simply to discuss IP, contact Eric Esteve (eric.esteve@ip-nest.com)

Also Read:

Balancing the Demands of OTP for Advanced Nodes with Synopsys IP

Alphawave Semi is in Play!

Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation


Balancing the Demands of OTP for Advanced Nodes with Synopsys IP

by Mike Gianfagna on 04-14-2025 at 6:00 am


One-time programmable (OTP) non-volatile memory has been around for a long time. Compared to other non-volatile memory technologies OTP has a smaller footprint and does not require additional manufacturing steps, making it a popular choice to store items such as boot code and encryption keys. While this sounds simple, the growth of ubiquitous AI deployment and the associated demand for more advanced technology make balancing the demands of OTP quite challenging.

These devices play a critical role to securely store data, sensitive program code, product information, and encryption keys for authentication. The devices must operate reliably to achieve a successful chip, and the spiraling cost of new technologies makes the stakes quite high. But advanced nodes present many challenges to ensuring OTP memories work reliably. Getting it right can become a substantial balancing act. The good news is there is a path to balancing the demands of OTP for advanced nodes with Synopsys IP.

What’s at Stake?

Spiraling design, mask, and wafer costs in advanced FinFET nodes make achieving first-pass success more important than ever. Reliable IP operation, and in particular the critical functions that OTP IP enables, is directly on the path to success. The figure below illustrates how fast these costs are mounting for advanced technologies.

Spiraling design costs

OTP IP delivers critical data that unlocks the functions of advanced, AI-based designs. This data is highly sensitive, so accurate delivery of the information is required. And compromise of information such as encryption keys cannot be tolerated. Against these stringent requirements there are many factors to consider. Let’s look at some of them.

Barriers to Success

Let’s start with the basics. For typical antifuse OTP memory, unprogrammed cells represent a logic value of 0 and programmed cells represent a logic value of 1. When these devices are first manufactured, all cells are unprogrammed and so read as logic 0. Programming a cell involves applying high voltage to it. The high voltage results in a breakdown of the oxide and formation of a channel or filament, creating a current path that can be measured.

So, reading an OTP requires measuring the gate leakage current to determine if the cell is programmed (logic 1) or unprogrammed (logic 0). This involves using regulated voltages above the core supply voltage to get enough current on the bit lines to reliably read the data.
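Conceptually, the sense amplifier reduces each read to a current comparison against a trip point. A toy model of that decision (the threshold value is purely illustrative, not a Synopsys parameter):

```python
# Toy model of an antifuse OTP read: a programmed cell's oxide filament
# conducts far more gate leakage current than an unprogrammed cell, and the
# sense amplifier compares the measured current against a trip point.
SENSE_THRESHOLD_UA = 1.0  # hypothetical sense-amp trip point, in microamps

def read_cell(leakage_ua: float) -> int:
    """Return 1 if the measured leakage indicates a programmed (blown) cell."""
    return 1 if leakage_ua > SENSE_THRESHOLD_UA else 0

print(read_cell(10.0))  # programmed cell: strong filament current
print(read_cell(0.01))  # unprogrammed cell: only residual leakage
```

The advanced-node problems described next all attack this comparison: leaky unprogrammed bits push residual current up toward the trip point, while weak filaments pull programmed-cell current down toward it.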

So far, this sounds straightforward. But technology advances make the process challenging.

Advanced nodes have thinner oxides, so reading at regulated voltages above the core supply voltage makes the OTP more susceptible to leaky bits and misreading bits as logic 1 even when they are not programmed. Thinner oxide also creates higher device stress on unprogrammed cells within the word being read.

In addition, advanced nodes have higher device leakage. This means using higher voltages to drive sufficient current for programming the OTP. These higher voltages can result in damage, leading to programming failures. And due to the thinner oxide, advanced node OTP is more susceptible to high voltages and may become over-programmed. Over-programming an OTP can result in poor programming quality and unnecessary over-exposure of the cells to high voltage. To make things worse, high voltage can cause program disturbs, where unintended neighboring cells are accidentally programmed, causing other errors.

There are also a host of PPA challenges to be dealt with. Higher leakage can make it difficult to keep the OTP area competitive and may limit the maximum bit count for reliable operation. Also, the total cost of manufacturing is affected by programming time. Programming the OTP requires a significant increase in voltage. As supply voltages are lower in advanced nodes, it may take longer to ramp up the voltages to drive the programming currents and successfully program the OTP, resulting in more time and cost.

Techniques to Balance Requirements for Success

Here is a subset of the requirements.

A reliable solution starts with bitcell design. The quality of the filament formed during programming depends on how well the oxide is broken, which in turn depends on the bitcell area. If it’s too small, breaking the oxide and forming the filament becomes difficult, leading to programming failures. If it’s too large, multiple breaks in the oxide may occur during programming.  

All of this can create errors in reading critical data. So, the bitcell area must be carefully chosen to optimize the formation of the filament during programming to prevent error conditions, ensuring reliable programmability.

OTPs rely on high voltages for both reading and programming. These voltages are generated and regulated by an analog integrated power supply (IPS). The design of this device is critical for the correct functioning of the OTP as variations in the required voltages will result in data retention issues or errors.

Also, the data output from the OTP during reads must always be reliable. Ensuring the integrity of the data read is crucial. Signals that flag the OTP output as good-to-go are essential to weed out unintended data corruption from voltages that are not stable during reads.

In addition, the high device leakage for advanced nodes requires intervention to ensure not only reliability but also that performance and power targets are met for the OTP. The length of the bitlines and the width of the memory array must be carefully designed to avoid excessive IR drop when the memory is operating.

And finally, optimized analog design is key. The sense amplifiers must be particularly sensitive to resolve the low read currents typical of advanced nodes, and programming speed, which impacts manufacturing cost, must be optimized through expert design of the high-voltage circuitry. Achieving this is challenging due to conflicting requirements: the need to minimize the overall area of the OTP while still providing enough current from the charge pump in the IPS to successfully program the memory.

Synopsys Delivers the Solution

Synopsys OTP NVM IP for advanced nodes starts with the design of a robust, optimized anti-fuse bit cell. The design has been proven over high temperature operating life (HTOL) tests. This design balances all the requirements discussed above.

The solution includes a memory array composed of tiled bit cells, decoders, analog components such as sense amplifiers, and an IPS that generates the necessary voltages for reads and programming. The choice of read voltage ensures reliable bit cell reads and guarantees data retention for at least 10 years.

The IP is enhanced with additional bits to guard against random manufacturing defects and field failures. Each word is equipped to correct a leaky bit and/or a programming failure found during initial testing. Additional repair resources are available for multiple failures within a word, and entire words can be replaced if necessary. The OTP memory array also includes additional bits for storing error correction codes (ECC).
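Per-word ECC bits work like any single-error-correcting code: redundant parity lets the controller locate and flip one bad bit on read. As a minimal illustration, here is a textbook Hamming(7,4) encoder/decoder (the actual code used in Synopsys OTP IP is not disclosed in the article):

```python
# Textbook Hamming(7,4): 4 data bits protected by 3 parity bits, enough to
# correct any single bit error. Illustrative only; not Synopsys's ECC scheme.
def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                   # simulate one leaky or failed bit in the array
assert decode(cw) == word    # the read still returns the correct data
```

Production SECDED codes add one more parity bit so a double error can at least be detected, which is why the array reserves "additional bits" beyond the raw data width.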

The IP is available in a wide range of configurations, enabling selection of the optimum options for each application. A controller that manages the reads and writes, test-and-repair, and ECC encoding and decoding is part of the overall solution. The controller is delivered as soft IP in the form of RTL, with the OTP memory array and IPS integrated into a single hard macro.

Substantial security capabilities are also part of the package.

To Learn More

I have really just scratched the surface on the capabilities available from Synopsys for optimized implementation of OTP memory.  Almost every design these days will require some form of OTP to correctly enable operation. It is worth your time to see how Synopsys can help you tame the balancing act required to get your design working reliably at advanced nodes.

An informative article is available here: Achieving Reliable and Secure SoC Designs with Advanced OTP IP.  A comprehensive datasheet for the package is available here.  And you can visit the webpage on Synopsys Non-Volatile Memory IP here.  There are many additional resources there. And that will provide you with what’s needed to understand balancing the demands of OTP for advanced nodes with Synopsys IP.


Podcast EP283: The evolution of Analog High Frequency Design and the Impact of AI with Matthew Ozalas of Keysight

Podcast EP283: The evolution of Analog High Frequency Design and the Impact of AI with Matthew Ozalas of Keysight
by Daniel Nenni on 04-11-2025 at 10:00 am

Dan is joined by Matthew Ozalas, a distinguished RF engineer at Keysight Technologies. With extensive experience in RF and microwave engineering, Matthew has made significant contributions to the field, particularly in the design and development of RF power amplifiers. His expertise spans hardware and software applications as well as design and automation.

In this insightful discussion, Dan explores the realm of RF/high-frequency design with Matthew, who describes some of the unique requirements of this class of design. The impact of AI is explored as well. Matthew explains how the massive data available needs to be harnessed. The methods he describes are different from mainstream digital design. The strategies to build new, AI-assisted design flows are also explored. Matthew describes the importance of using the same analysis technologies across all phases of the design, from chip to package to system. He also describes the work going on at Keysight to enable novel, new AI-assisted design flows for high frequency design and what the future will look like.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Synopsys Webinar: The Importance of Security in Multi-Die Designs – Navigating the Complex Landscape

Synopsys Webinar: The Importance of Security in Multi-Die Designs – Navigating the Complex Landscape
by Daniel Nenni on 04-11-2025 at 6:00 am

fallback image

In today’s rapidly evolving digital landscape, the security of electronic systems is of the highest priority. This importance is underscored by technological advancements and increasing regulatory demands. Multi-die designs, which integrate multiple dies (also called chiplets) into a single package, introduce complexity and potential vulnerabilities. These vulnerabilities stem from the varied functions of chiplets and a fragmented supply chain. Addressing these challenges requires a comprehensive approach that encompasses robust security measures at every level of the design and manufacturing process. Attend this Synopsys webinar to learn more about security for multi-die designs.

Regulation and Standardization in Security

Regulations and standards play a crucial role in addressing the security challenges associated with advanced electronic systems, including multi-die designs. Standards organizations define security levels, procedures, and certification tests to ensure chiplet conformance. The security requirements they define must cover individual chiplets, their interconnects, and the overall system, providing a holistic approach to mitigate risks associated with increased complexity.

Additionally, emerging regulations like the Cyber Resilience Act in Europe and the ISO/SAE 21434 standard for automotive cybersecurity are shaping the security landscape. These regulations emphasize the need to design security into systems from the ground up. The Cyber Resilience Act sets clear security requirements for digital products and services throughout their lifecycle, while ISO/SAE 21434 provides a framework for managing cybersecurity risks in the automotive industry, ensuring supply chain protection.

Together, regulations and standards highlight the importance of a proactive security approach. By adhering to these guidelines, organizations can mitigate risks and safeguard their products against emerging threats, ensuring robust security for advanced electronic systems.

Quantum Computing Threats and Post-Quantum Cryptography

Another important driver for progress is the imminent threat posed by quantum computing, which will break current public-key cryptographic algorithms like RSA and ECC. This makes the development of post-quantum cryptography essential. This field focuses on creating algorithms that are resistant to quantum attacks, ensuring long-term security for electronic systems.

Post-quantum cryptography aims to develop algorithms secure against both classical and quantum threats. The first post-quantum cryptographic algorithms have now been standardized by NIST, marking a significant milestone in protecting data as quantum technology advances. Implementing these new algorithms is crucial for maintaining the security of electronic systems against quantum computing threats, ensuring long-term data protection.
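To make the hash-based approach concrete: one of the NIST-standardized schemes, SLH-DSA, builds on ideas similar to the classic Lamport one-time signature, which relies only on the one-wayness of a hash function rather than on number-theoretic problems that quantum computers can break. The sketch below is an illustrative Python toy, not any standardized algorithm; real post-quantum schemes add Merkle trees, many-time use, and far more careful engineering.

```python
import hashlib
import secrets

def keygen(n=256):
    # Private key: two lists of n random 32-byte secrets (one pair per message-digest bit).
    sk = [[secrets.token_bytes(32) for _ in range(n)] for _ in range(2)]
    # Public key: the SHA-256 hash of each secret.
    pk = [[hashlib.sha256(s).digest() for s in row] for row in sk]
    return sk, pk

def _digest_bits(message):
    d = hashlib.sha256(message).digest()
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message, sk):
    # Reveal one secret per digest bit: sk[bit][i]. The key must never be reused.
    return [sk[b][i] for i, b in enumerate(_digest_bits(message))]

def verify(message, sig, pk):
    bits = _digest_bits(message)
    return all(hashlib.sha256(s).digest() == pk[b][i]
               for i, (s, b) in enumerate(zip(sig, bits)))
```

Because security rests only on the hash function, a quantum attacker gains no structural advantage the way Shor's algorithm does against RSA and ECC.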

Other Advanced Security Solutions

To further facilitate a secure-by-design approach, a range of advanced security solutions can be leveraged. These include Physical Unclonable Functions (PUFs), embedded hardware secure modules, Secure Boot mechanisms, and Secure Interface solutions. Each of these technologies plays a critical role in fortifying multi-die designs against current and future threats.

  • PUFs provide a unique and unclonable identity to each chiplet, making it difficult for attackers to replicate or tamper with the hardware.
  • Embedded hardware secure modules, such as Synopsys’ tRoot, provide a trusted execution environment that can securely manage cryptographic operations and sensitive data.
  • Secure Boot mechanisms ensure that only authenticated and authorized firmware and software are executed on the device, preventing malicious code from being loaded.
  • Secure Interface solutions protect data in transit between chiplets and to other system components, ensuring that communications remain confidential and tamper-proof.
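To make the Secure Boot idea in the list above concrete, here is a minimal Python sketch of the device-side check: refuse to hand control to firmware that fails authentication. Real implementations use asymmetric signatures (e.g. ECDSA, or post-quantum schemes going forward) with the verification key anchored in OTP or boot ROM; the HMAC construction, key name, and functions below are illustrative assumptions, not any vendor's API.

```python
import hashlib
import hmac

# Hypothetical device key; in real designs a public key (or its hash) is
# anchored in OTP/eFuses or boot ROM, not a shared secret in software.
BOOT_KEY = b"device-unique-key-from-otp"

def sign_firmware(image: bytes, key: bytes) -> bytes:
    # Vendor side: produce an authentication tag over the firmware image.
    return hmac.new(key, image, hashlib.sha256).digest()

def secure_boot(image: bytes, tag: bytes, key: bytes = BOOT_KEY) -> bool:
    # Device side: constant-time compare before transferring control.
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # refuse to boot tampered or unauthorized firmware
    return True      # safe to jump to the verified image
```

The essential property is that any change to the image, however small, invalidates the tag, so malicious code cannot be loaded in place of authorized firmware.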
Conclusion

Navigating the complex security landscape of multi-die designs requires a comprehensive and proactive approach. By understanding the importance of security and the drivers behind it, and by leveraging advanced security solutions, it is possible to build robust and secure electronic systems. Standards organizations, emerging regulations, and the advent of quantum computing all play a role in shaping the security domain. By designing security into systems from the ground up and addressing key considerations at every level, we can ensure the protection of our systems and data against current and future threats.

Want to stay ahead of security challenges in multi-die designs? Register for our webinar, “How to Approach Security for Multi-Die Designs,” and learn the essential techniques you will need for your next project.

Also Read:

Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation

Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures

DVCon 2025: AI and the Future of Verification Take Center Stage


Generative AI Comes to High-Level Design

Generative AI Comes to High-Level Design
by Daniel Payne on 04-10-2025 at 10:00 am

high level agents min

I’ve watched the EDA industry change the level of design abstraction starting from transistor-level to gate-level, then RTL, and finally using High Level Synthesis (HLS). Another emerging software trend is the use of generative AI to make coding RTL more automated. There’s a new EDA company called Rise Design Automation that enables design and verification beyond RTL, so I attended their recent webinar to learn more about what they have to offer.

Ellie Burns started out with an overview of the semiconductor market and how trends like AI/ML, 5G, IoT and hardware accelerators are driving the landscape. RTL design techniques and IP reuse have kept pace so far, but there is an insatiable demand for new and larger designs amid a general engineering shortage.

What Rise offers is a new, three-pronged approach to meet the market challenges:

Their generative AI is an assistant that automatically creates SystemVerilog, C++ or SystemC code based on your prompts. An agent also runs high-level synthesis and verification, saving time and effort. There’s even an agent for iterative performance tuning and optimization tasks.

This high-level code is synthesized into RTL, and using high-level verification can be up to 1,000X faster than RTL verification. This flow enables debug, analysis and exploration more quickly and thoroughly than an RTL approach. System architects get early, executable models to explore architectures. RTL designers can re-use system models and reach PPA optimization faster. Verification engineers start verifying much earlier and benefit from auto-generated adaptors/transactors. Even software engineers use the virtual platform for early access to accurate hardware behavior, and their model is always in sync with the hardware.

Mike Fingeroff, Chief of HLS at Rise, was up next, showing how the high-level agents work with a human in the loop using existing pre-trained LLMs plus a specialized knowledge base. The pre-trained LLMs eliminate the need for any sensitive RTL training data. HLS converts SystemVerilog, C++ or SystemC into synthesizable RTL, inserts pipeline registers to meet timing, adds dataflow control, enables exploration and even infers memories.

 

Their HLS synthesis creates RTL, using constraints for power, performance and area while optimizing with a technology-aware library. Here’s the architecture that Rise has:

Alan Klinck, Co-founder of Rise DA, talked about agent-based generative AI for hardware design in three parts:

  • Rise Advisor – prompts used for tasks and questions, expert assistance, accelerate code development
  • Rise HLS Agent
  • Rise Optimization Agent

For an example case he showed us hls4ml, a Python package for machine learning inference in FPGAs.

A live prompt typed was, “How do I make my HLS design synthesizable?”

The Rise system is modular, so you can even use your own language models. Their knowledge base plus language models reduces hallucinations, improving the quality of results. The language models can run on-premises or in the cloud, your choice, and your tool usage is never used for training. A consumer GPU like the NVIDIA RTX 4090 is powerful enough to run their tools. You go from HLS to RTL typically in seconds to minutes, so it’s very fast.

For the live demo they used the Visual Studio Code tool on a loosely timed design with some predefined prompts, and as they asked questions and made prompts we saw newly generated code, ready for HLS. Trade-offs between multiple implementations were quickly generated for comparison to find the optimal architecture.

Summary

I was impressed with an EDA vendor running their tool flow live during a webinar, instead of pre-canned screenshots. If you’ve considered improving your familiar RTL-based design flow with something newer and more powerful, then it’s likely time to look at what Rise DA has to offer. The engineering team has decades of experience in EDA and HLS, plus they’ve added generative AI to the flow that now benefits both design and verification engineers.

View the replay of their webinar online.

Related Blogs


Keysom and Chipflow discuss the Future of RISC-V in Automotive: Progress, Challenges, and What’s Next

Keysom and Chipflow discuss the Future of RISC-V in Automotive: Progress, Challenges, and What’s Next
by Admin on 04-10-2025 at 6:00 am

image chipflow keysom

by Tomi Rantakari, CEO of ChipFlow, and Luca Testa, COO of Keysom

The automotive industry is undergoing a major transformation, driven by electrification, the rise of new market players, and the rapid adoption of emerging technologies such as AI. Among the most significant advancements is the growing adoption of RISC-V, an open-standard instruction set architecture (ISA) that is reshaping how automotive chips are designed and deployed. Companies like Keysom (keysom.io) and Chipflow (chipflow.io) are playing a crucial role in this shift, helping to navigate the opportunities and challenges of RISC-V in the automotive sector.

Progress in RISC-V Adoption

RISC-V offers a compelling alternative to proprietary architectures, providing flexibility, cost efficiency, and a growing ecosystem of development tools. In automotive applications, its modularity allows manufacturers to tailor processors for specific workloads, optimizing performance and power efficiency. With the increasing complexity of software-defined vehicles and real-time processing requirements, RISC-V’s open ecosystem fosters innovation by enabling chip customization at all levels.

Chipflow and Keysom have been actively contributing to this progress, focusing on design automation, tooling, and reference architectures and IPs that accelerate the adoption of RISC-V-based solutions in automotive systems. Both companies help reduce barriers to entry, ensuring a smoother transition for automotive players looking to integrate RISC-V into their next-generation platforms.

Challenges in Automotive Integration

Despite its advantages, RISC-V faces several challenges in automotive applications. The industry’s strict safety and reliability requirements necessitate extensive validation and compliance with standards such as ISO 26262.

Additionally, transitioning from well-established proprietary architecture requires significant investment in software development, ecosystem support, and integration with existing automotive workflows.

Keysom and Chipflow recognize these challenges and are working to address them through industry collaborations, improved IP customization, toolchains, and enhanced verification methodologies. Developing a robust software ecosystem and functional safety-certified toolchains remains a top priority to ensure that RISC-V can meet the demands of mission-critical automotive applications.

The Role of the European Chips Act

One of the key drivers of RISC-V adoption in Europe is the European Chips Act. This initiative aims to enhance Europe’s semiconductor capabilities, ensuring supply chain resilience and fostering innovation in critical technologies, including open-source processor architectures like RISC-V.

Under the “Chips for Europe Initiative,” significant investments are being made to support research, pilot production, and the development of next-generation semiconductor technologies. This initiative aligns with the vision of Keysom and Chipflow, both of which are committed to advancing open hardware solutions that provide greater flexibility and sovereignty in automotive chip design. By leveraging the funding and infrastructure provided by the Chips Act, both companies are strengthening their efforts to build scalable, high-performance RISC-V solutions tailored for automotive applications.

What’s Next for RISC-V in Automotive?

The future of RISC-V in the automotive industry looks promising, with increasing investments, growing industry adoption, and expanding ecosystem support. As the industry moves toward software-defined vehicles, the demand for flexible, customizable hardware will continue to rise.

Keysom and Chipflow remain at the forefront of this evolution, driving advancements in RISC-V-based automotive processors. By focusing on design automation, open-source collaboration, and compliance with industry standards, both companies are helping pave the way for a more open and innovative automotive semiconductor landscape.

As RISC-V matures and gains traction in safety-critical applications, collaboration and ecosystem growth will be essential. The path toward a more open, customizable, and efficient automotive hardware future is becoming increasingly clear. The next few years will be crucial in shaping the role of RISC-V in vehicles, and Keysom and Chipflow are committed to contributing to this transformation.

Also Read:

CEO Interview with Cyril Sagonero of Keysom


Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation

Synopsys Executive Forum: Driving Silicon and Systems Engineering Innovation
by Kalar Rajendiran on 04-09-2025 at 10:00 am

Sassine Keynote (with Satya)

The annual SNUG (Synopsys Users Group) conference, now in its 35th year, once again brought together key stakeholders to showcase accomplishments, discuss challenges, and explore opportunities within the semiconductor and electronics industry. With approximately 2,500 attendees, SNUG 2025 served as a dynamic hub for collaboration and knowledge exchange.

This year, the conference was co-located with the inaugural Synopsys Executive Forum, an exclusive event featuring industry-impacting discussions attended by around 350 executives from Synopsys’ customer base, along with members of the media, analyst community, and investors. The forum facilitated in-depth discussions on the evolving silicon and systems landscape, the role of AI in the future of computing, and strategies to address key industry challenges.

The following summarizes the key insights gathered from various sessions at the event.

Synopsys CEO Sassine Ghazi’s Keynote: Re-Engineering Engineering

As semiconductor designs grow increasingly complex, traditional verification methods are proving insufficient. With quadrillions of cycles requiring validation, new verification paradigms have become critical. The cost and time associated with verification are too significant to rely on “good enough” approaches, making accurate and thorough validation indispensable.

The integration of artificial intelligence (AI) is fundamentally reshaping electronic design automation (EDA) workflows, extending beyond mere efficiency improvements. As chip designs grow in complexity, new methodologies are necessary, and AI-driven workflows offer substantial productivity gains. AI adoption in EDA is evolving from assistance to full automation. Initially, AI acted as a copilot, aiding engineers, but it is now advancing toward specialized AI agents, multi-agent orchestration, and, eventually, autonomous design systems. Just as autonomous driving systems evolved in stages, AI in engineering must undergo a similar maturation process—gradually reducing human intervention while increasing efficiency.

Collaboration among industry leaders is crucial to advancing semiconductor innovation. Partnerships with companies such as NVIDIA, Microsoft, and Arm are essential for driving AI adoption in semiconductor design. No single company can tackle the growing complexity of semiconductor engineering alone, making collaboration key to accelerating innovation and developing solutions that keep pace with the industry’s rapid evolution.

Looking ahead, AI-driven “agent engineers” will work alongside human engineers—initially assisting them, but eventually making autonomous decisions in chip design. This shift will redefine the engineering process, unlocking new levels of efficiency and innovation while enabling more advanced and agile semiconductor development.

The impact of semiconductor advancements extends far beyond the industry itself, profoundly influencing how people live, work, and interact with technology—shaping the future of society. Synopsys’ mission is to empower innovators to drive human advancement.

A special highlight of the keynote session was the virtual guest appearance of Satya Nadella, CEO and Chairman of Microsoft. Nadella emphasized several AI milestones from a software perspective: the evolution from code completion to chat-based interactions, and now the emergence of Agentic AI, where AI agents autonomously execute high-level human instructions. This development has profound business implications, as discussed during the Fireside Chat session.

Fireside Chat: Sustainable Computing – Silicon to Systems

Synopsys CEO Sassine Ghazi and Arm CEO Rene Haas engaged in a thought-provoking discussion on sustainable computing and its implications for the future.

Emerging AI business models are beginning to disrupt traditional platforms and industries. AI-driven agents have the potential to automate user interactions, posing challenges for businesses and platforms that rely on conventional engagement and ad-based revenue models. As AI bypasses traditional interaction strategies, entire sectors could be reshaped. Companies that swiftly adapt to new AI technologies and business models will position themselves as industry leaders.

The Stargate initiative, a collaboration between OpenAI, SoftBank, Oracle, and Microsoft, aims to build massive data centers across the U.S. This effort presents challenges in energy consumption, thermodynamics, and infrastructure management. Large-scale AI projects will require innovative energy-efficient solutions, integrating renewable energy with cutting-edge computing infrastructure to meet their immense power demands. As AI workloads extend to edge devices, the development of low-power, high-performance chips will be critical to making AI viable in mobile and embedded systems.

In healthcare, AI-driven drug discovery has the potential to significantly accelerate the drug approval process—particularly in areas like cancer research—by improving success rates and eliminating costly early-stage trials.

Panel Discussion: Silicon to Systems Innovations for AI

A distinguished panel of experts from Astera Labs, TSMC, Broadcom, Arm, and Meta explored key innovations in AI-driven semiconductor development.

The discussion highlighted how AI is transforming chip design decision-making through advanced reasoning models that optimize design trade-offs and system performance. Panelists also examined strategies to reduce AI model training time, enabling faster deployment and iteration cycles.

Ensuring reliable connectivity in AI-powered devices was another key focus, as seamless data transmission is critical for AI-driven systems. Panelists emphasized the importance of designing products that align with customers’ development schedules while maintaining technological advancements.

Finally, they underscored the significance of strong IP partnerships, which foster collaboration and accelerate innovation across the semiconductor industry.

Panel Discussion: Re-Engineering Automotive Engineering

The complexities of integrating AI into automotive engineering were a central focus of this discussion, featuring experts from Ansys, Mercedes-Benz R&D North America, and Synopsys.

A key challenge in this domain is achieving seamless multi-domain interactions, where hardware, software, and AI technologies must function cohesively. Another critical factor is the high cost and complexity of data collection for advanced driver-assistance systems (ADAS), which is essential for training and validating AI-driven automotive solutions.

Mercedes-Benz R&D North America shared its approach to edge processing, where data is processed locally in vehicles rather than being stored or transmitted to the cloud—reducing latency and improving efficiency. The discussion also explored the role of synthetic data in training AI models. While synthetic data is widely used for validation, panelists noted that its effectiveness in training AI models remains limited. Additionally, electronic control units (ECUs) continue to evolve, playing an increasingly vital role in advancing autonomous driving capabilities.

The panel also addressed the rapid pace of product development in the automotive industry, particularly in China, where the minimum viable product (MVP) approach enables fast iterations but raises concerns about long-term reliability.

Panel Discussion: Custom High Bandwidth Memory (HBM) for AI

A panel featuring experts from AWS, SK hynix, Samsung, Marvell, and Synopsys explored this topic.

High Bandwidth Memory (HBM) has been a cornerstone of AI advancement, enabling high-speed data access with low latency, which is critical for AI workloads. In this discussion, panelists examined the role of HBM in AI and the ongoing debate between traditional HBM and custom HBM (cHBM) solutions.

While custom HBM offers benefits such as reducing I/O space by at least 25%, it comes at the cost of significantly higher power consumption. This tradeoff presents a challenge in balancing efficiency and performance.

Customization in memory solutions also poses scalability challenges. Memory vendors struggle with standardized production when customization is required, potentially hindering mass adoption. The lack of an agreed-upon architectural standard for custom HBM further complicates widespread deployment. However, future iterations of UCIe (Universal Chiplet Interconnect Express) are expected to bridge the gap between custom and traditional HBM solutions, promoting greater interoperability and scalability in AI hardware.

Panel Discussion: The Productivity Promise – AI Apps from RL to LLMs to Agents

This panel, moderated by industry journalist Junko Yoshida and featuring experts from Nvidia, Microsoft, AMD, Intel, and Synopsys, explored the broader implications of AI beyond chip design.

A key transformation in the industry is the shift from AI as a tool to AI as an autonomous problem solver, marking the emergence of Agentic workflows. In this new paradigm, AI no longer simply assists with tasks but takes on a proactive, autonomous role in solving complex problems.

As AI evolves, its impact on education becomes increasingly significant. Preparing the next generation of semiconductor engineers to collaborate with AI-driven design tools is essential, ensuring they can leverage these technologies in innovative ways.

AI’s learning capabilities are another major asset, enabling it to uncover insights and make connections that humans may not have initially considered—enhancing decision-making and innovation.

Additionally, the use of generator-checker models exemplifies a powerful approach in AI systems. In this model, AI agents act as both creators and validators, working in tandem to ensure reliability and accuracy, thereby advancing the trustworthiness and effectiveness of AI applications.

Panel Discussion: Realizing the Promise of Quantum Computing

A panel featuring experts from IBM Quantum, the University of Waterloo, Intel, Qolab, and Synopsys explored how quantum computing intersects with AI.

Quantum computing is still in its early stages, with significant progress made, but practical, large-scale systems may take decades to develop. While quantum mechanics is a broad phenomenon, its applications extend beyond quantum computers to various technological systems.

The relationship between quantum computing and AI is bidirectional, where Quantum for AI and AI for quantum both play vital roles in advancing each other. On one hand, quantum computing has the potential to significantly enhance AI by accelerating computations, enabling faster and more efficient model training. On the other hand, AI is being leveraged to optimize quantum systems, improving algorithms and error correction methods. However, building stable qubits remains a major challenge. Maintaining quantum states for computation is notoriously difficult due to their inherent sensitivity to external factors such as temperature and electromagnetic interference. Despite these challenges, the promise of quantum computing lies in its ability to handle complex problems beyond the capacity of classical computers.

To unlock high-value business applications, millions of qubits will be necessary, driving continued research and development to make quantum computing a practical and transformative tool for industries like AI, pharmaceuticals, and cryptography. A hybrid approach combining classical and quantum systems is necessary for practical quantum computing, leveraging existing infrastructure to control qubits. Effective system-level design, including hardware, software, and control systems, is crucial for making quantum systems functional.

Scalability remains a major challenge, requiring millions of qubits and error correction to achieve practical quantum computing, much like the scaling challenges faced by traditional semiconductors. The quality of qubits and the need for error correction are critical, as reliable qubits are essential for minimizing errors.

Manufacturability is key to scaling quantum systems, and the expertise of the semiconductor industry will help accelerate progress. Long-term investment and collaboration between the private and public sectors, as well as academic institutions and companies, were discussed as vital for advancing the field. Although quantum computing holds immense potential, practical applications will take time to develop, with ongoing experimentation driving innovation.

Revolutionary Results in Evolutionary Timescales

Toward the end of the Synopsys Executive Forum, I had an informal conversation with Aart de Geus, where he shared his insights on various topics, including sustainable power, renewable energy, CO2 emissions, his work with NGOs, and the role of nuclear power plants in future energy grids. This chat, which took place between sessions and was not part of the formal agenda, provided valuable perspectives on the broader implications of technological advancements and the rapid adoption of new technologies in the semiconductor industry.

Aart summed up the mission that drives both Synopsys and the entire semiconductor ecosystem, remarking that in the face of these critical challenges, the semiconductor industry is delivering “revolutionary results in evolutionary timescales.”

Summary

The Synopsys Executive Forum provided a comprehensive look at the future of AI, semiconductor engineering, and computing. The event sessions were thoughtful, organized, and productive, making the forum a top-notch experience. As the industry moves toward more autonomous AI systems, re-engineered automotive solutions, and scalable quantum computing, collaboration across the ecosystem will be critical in driving sustainable, high-impact innovation. Looking forward to more such events.

Also Read:

Evolution of Memory Test and Repair: From Silicon Design to AI-Driven Architectures

DVCon 2025: AI and the Future of Verification Take Center Stage

Synopsys Expands Hardware-Assisted Verification Portfolio to Address Growing Chip Complexity


Podcast EP282: An Overview of Andes Focus on RISC-V and the Upcoming RISC-V CON

Podcast EP282: An Overview of Andes Focus on RISC-V and the Upcoming RISC-V CON
by Daniel Nenni on 04-09-2025 at 6:00 am

Dan is joined by Marc Evans, director of business development and technology at Andes. Marc has over twenty years of experience in the use of CPU, DSP, and Specialized IP in SoCs from his prior positions at Lattice Semiconductor, Ceva, and Tensilica. During his early career, Marc was a processor architect, making significant contributions to multiple microprocessors and systems at Tensilica, HP, Rambus, Rise Technology, and Amdahl.

Dan discusses with Marc the significant focus of Andes on commercialization of RISC-V, as well as the upcoming Andes RISC-V CON. Marc explains that Andes holds this conference at a time that doesn’t conflict with key events from RISC-V International; the idea is to provide broad coverage of the RISC-V movement throughout the year. Marc provides an overview of the event, which includes engineering-level presentations along with partner contributions, strategic presentations and a new Developer’s Track, which provides a deep dive into key areas. There are also fireside chats, high-profile talks from industry luminaries and the opportunity to network with conference attendees.

The event will be held at the DoubleTree Hotel in San Jose on April 29, 2025. You can learn more about the conference and register to attend for free here. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.