How I learned Formal Verification
by Daniel Nenni on 12-11-2024 at 10:00 am

Bing Xue is a dedicated Formal Verification Engineer at Axiomise, with a strong academic and professional foundation in hardware verification. He completed his PhD at the University of Southampton, where he conducted cutting-edge research on Formal Verification, RISC-V, and the impact of Single Event Upsets. Bing is proficient in RISC-V, SystemVerilog, and Formal Verification tools such as Cadence JasperGold, and is skilled in Python and Linux, bringing a versatile and analytical approach to his work.

How I learned FV

I had no idea what Formal Verification (FV) was when I started my PhD. I spent six months exploring related papers, books, websites and open-source projects, as well as watching videos, to learn about FV and SystemVerilog Assertions (SVA). However, I faced several challenges during that time.

Some resources, despite being labelled as FV-focused, primarily discussed simulation. Others were too abstract, providing no practical details, while some were too theoretical, presenting modelling and proving algorithms without real-world applications. After six months of study, I had a basic overview of FV but still didn’t know how to apply it to my project.

It took me another three months of hands-on practice with simple RISC-V designs to make progress. During that time, I made many mistakes and had to invest significant effort to understand and fix them. Searching for quality FV learning resources was time-consuming, and extracting accurate information was even more challenging. I always thought that if I had access to well-structured FV courses, including theory, demonstrations, and practical labs with real-world designs, I could have completed my project faster and with better results.

Why Axiomise FV courses

I finished the Axiomise FV courses last month. I believe they are the best courses for freshers and verification engineers alike. I wish I had discovered them earlier, as they would have made a significant difference in my research journey.

FV is more than model checking

Most of the resources I found provided only a general overview, covering the definition and history of FV. These resources mainly focused on model checking, but FV is not just model checking!

The Axiomise FV courses cover not only model checking but also theorem proving and equivalence checking. During my project, I mainly used model checking to evaluate fault tolerance and hardware reliability. After completing the course, I was inspired to use equivalence checking to improve my work.

Theory

I learned FV theory from books and papers. This theory covers transforming designs and specifications into mathematical models and formulas, and proving formal properties with various algorithms (such as BDD- and SAT-based ones). However, is this theory truly essential for all verification engineers?

Given that formal tools can handle much of the modelling and proving, it is clear that verification engineers should focus more on why, when and how to use FV. This is exactly what the Axiomise FV courses emphasize. These courses help verification engineers save valuable time by focusing on the most critical and applicable concepts, rather than overwhelming them with unnecessary details.

Formal Techniques

A ‘smart’ formal testbench, composed of high-quality formal properties, contributes significantly to better performance by reducing run time and pushing back the state-explosion barrier. But how can we develop such high-quality formal properties?

The Axiomise FV courses answer this question clearly: by applying formal (problem reduction) techniques to develop ‘smart’ formal testbenches. These techniques, such as abstraction, invariants, assume-guarantee, decomposition, case splitting, scenario splitting, black-boxing, cut-pointing and mutation, are explained in detail within the courses, accompanied by code and examples for a deeper understanding.
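To give a flavour of what these techniques look like in SVA, here is a minimal sketch of assume-guarantee. This is my own illustration rather than course material, and the module and signal names (clk, rst_n, req, gnt) are hypothetical:

  // Hypothetical arbiter interface: constrain the environment with an
  // assumption, then check the design against an assertion.
  module arbiter_fv (input logic clk, rst_n, req, gnt);
    // Assumption on the environment: a request stays high until granted.
    am_req_hold: assume property (@(posedge clk) disable iff (!rst_n)
      req && !gnt |=> req);
    // Obligation on the design: a grant is only given to a pending request.
    a_gnt_has_req: assert property (@(posedge clk) disable iff (!rst_n)
      gnt |-> req);
  endmodule

The assumption prunes illegal environments from the search space; the assertion is then proven exhaustively against every behaviour that remains.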

What sets the courses apart is the inclusion of step-by-step demos and labs that help learners master these problem reduction techniques. All the other resources I found fail to explain formal techniques in such an easy-to-understand manner. In my previous project, I didn’t apply all these techniques, which led to some inconclusive results when verifying multipliers and dividers. Now I know how to apply these methods effectively to improve my project.

Demos and Labs

When learning to develop formal testbenches, I often wished for more high-quality demos and labs. Unfortunately, the resources I found typically offered either overly simplistic examples, like the basic request-and-acknowledge handshake protocol sketched below, or non-generalizable designs, such as a niche hardware module with no wider relevance.
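For reference, that handshake example usually boils down to a single liveness property. The sketch below is my own, with hypothetical clk, rst_n, req and ack signals:

  // The kind of minimal check such resources stop at: every request
  // is eventually acknowledged.
  module handshake_fv (input logic clk, rst_n, req, ack);
    a_req_gets_ack: assert property (@(posedge clk) disable iff (!rst_n)
      req |-> s_eventually ack);
  endmodule

It is a fine first exercise, but it teaches little about taming a realistic design.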

I really enjoyed the demos and labs in the FV courses. I could see the careful selection of designs used for demonstration. For instance, the courses present the FIFO, a fundamental structure in electronics and computing, as a demonstration vehicle. Two brilliant abstraction-based methods are presented to exhaustively verify a FIFO: Two-Transaction and Smart Tracker. Another valuable example is using invariants for scalable proof and bug hunting.
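As a rough illustration of the two-transaction idea, consider the sketch below. It is my own reconstruction, not course code, and the FIFO interface names (push, pop, din, dout) are hypothetical. The tool picks two arbitrary, distinct symbolic values; we watch them enter the FIFO in order and assert that they leave in the same order:

  module fifo_two_txn_sketch #(parameter W = 8) (
    input logic         clk, rst_n,
    input logic         push, pop,
    input logic [W-1:0] din, dout
  );
    // Two symbolic constants: the tool proves the property for every
    // possible pair of distinct values in one run.
    logic [W-1:0] d1, d2;
    am_stable:   assume property (@(posedge clk) $stable(d1) && $stable(d2));
    am_distinct: assume property (@(posedge clk) d1 != d2);

    // Record the order in which d1 and d2 enter and leave the FIFO.
    logic d1_in, d2_in, d1_out;
    always_ff @(posedge clk or negedge rst_n)
      if (!rst_n) begin
        d1_in  <= 1'b0;
        d2_in  <= 1'b0;
        d1_out <= 1'b0;
      end
      else begin
        if (push && din == d1)          d1_in  <= 1'b1;
        if (push && din == d2 && d1_in) d2_in  <= 1'b1;
        if (pop  && dout == d1)         d1_out <= 1'b1;
      end

    // Ordering check: d2 must not leave the FIFO before d1 has left.
    a_order: assert property (@(posedge clk) disable iff (!rst_n)
      (pop && dout == d2 && d2_in) |-> d1_out);
  endmodule

Because d1 and d2 are symbolic, this one small check stands in for every possible pair of transactions, which is what lets the proof scale with FIFO depth.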

All serial designs, such as processors and memory subsystems, which are challenging to verify, can be represented and verified as FIFOs. The FV courses also provide multiple demos and labs, such as a variable-packet design and a micro-cache, to demonstrate this concept.

From the FV courses, I strongly believe verification engineers can acquire all the knowledge and skills required to formally verify complex designs.

The Complete FV flow: ADEPT

Most resources agree that FV can be used for exhaustive verification, but the question is: how? What is the overall process of FV? How can one verify the correctness of a formal testbench? When is it appropriate to sign off? These were questions I struggled with early on, as I couldn’t find any detailed standards or guidance. It took me considerable time to investigate before eventually realizing that coverage was the key to answering these questions.

Axiomise addressed these challenges by developing ADEPT, the first industrial FV flow, which clearly defines the path to FV sign-off. The FV courses also introduce formal coverage, which is more comprehensive than coverage in simulation. These insights are invaluable for conducting efficient and confident FV workflows.
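One ingredient of formal coverage can be shown directly in SVA: cover properties that confirm interesting scenarios are still reachable once all the testbench assumptions are in place. The example below is my own, with hypothetical clk, rst_n and push signals:

  module fifo_cover_fv (input logic clk, rst_n, push);
    // Reachability: two back-to-back pushes are still possible under the
    // environment assumptions; an unreachable cover is a classic symptom
    // of over-constraining.
    c_back2back_push: cover property (@(posedge clk) disable iff (!rst_n)
      push ##1 push);
  endmodule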

Benefits

Axiomise’s vision is to make formal normal, and the FV courses effectively address three major misunderstandings about FV:

  1. FV is not a mystery. With the training from Axiomise, all engineers (whether they are design engineers or verification engineers) can (and should) use FV in all stages.
  2. FV is not a magic wand. A high-quality formal testbench is essential for effective bug hunting and exhaustive proof.  The FV courses provide all the necessary knowledge and skills to develop and evaluate such formal testbenches.
  3. Learning FV is not hard. Following the FV courses, even beginners can smoothly transition into formal verification engineers.

Summary

In summary, the Axiomise FV courses are an invaluable resource for anyone looking to master formal verification. I sincerely recommend the FV courses to all design and verification engineers.

Also Read:

The Convergence of Functional with Safety, Security and PPA Verification

An Enduring Growth Challenge for Formal Verification

RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®


Accellera 2024 End of Year Update
by Bernard Murphy on 12-11-2024 at 6:00 am

From my viewpoint, standards organizations in semiconductor design always looked like they were “sharpening the saw”: further polishing/refining what we already have but not often pushing on frontiers. Very necessary of course to stabilize and get common agreement in standards, but equally always seeming to be behind the innovation curve. Given the recent wave of prominent new technologies, particularly through system vendors getting into chip design, it is encouraging to realize that organizations like Accellera have already jumped (cautiously 😀) on opportunities to push on those frontiers. Standards are again acknowledging innovation in the industries they serve.

Progress in 2024

Here I’m just going to call out a few of the topics that particularly interest me, no slight intended to other standards under the Accellera umbrella.

Portable Test and Stimulus (PSS), defining a framework for system-level verification, is one of these frontiers; the state space for defining system-level tests is simply too vast to be manageable with a bottom-up approach to functional verification. PSS provides a standard framework to define high-level system-centric tests, monitors, and randomization: the kind of features we already know and love in UVM, but here abstracted to system-level relevance.

Coverage is such a feature, already provided in the standard but now with an important extension in the 3.0 update. RTL coverage metrics obviously don’t make sense at a system level. Randomization and coverage measurement should be determined against reasonable use-cases – sequences of actions and data conditions – otherwise coverage metrics may be misleading. PSS 3.0 introduces behavioral coverage to meet these needs.

You may remember one of my earlier blogs on work towards a Federated Simulation Standard (FSS). Quick summary: the objective is to be able to link together simulators in the EDA domain with simulators outside that domain, say for talking to edge sensors, drivetrain MCUs and other devices around the car, all communicating through CAN or automotive Ethernet. Similar needs arise in aircraft simulations.

This requires standards for linking to proprietary instruction set simulators and other abstracted models to enable an OEM/Tier1 to develop and test software against a wide range of scenarios. An obvious question is how this standard will fit with the Arm-sponsored SOAFEE standard. As far as I can see, SOAFEE seems to be mostly about interoperability and cloud-native support for the software layer of the stack, still leaving interoperability at the hardware and EDA level less defined. That’s where I suspect FSS will concentrate first. FSS is still at the working group and user group stage, with no defined release date yet, but Lu Dai (Accellera’s chair) says that pressure from the auto companies will force quick progress.

Expected in 2025

I have always been interested in progress on mixed signal standards. Analog and RF are becoming more entangled with digital cores in modern designs. For example, sensing demands periodic calibration to adjust for drift, DDR PHYs must align between senders and receivers, and RF PHYs now support analog beamforming guided by digital feedback. All of which must be managed through software/digital controlled interfaces into the analog functionality.

Software-digital-analog verification is a more demanding objective than traditional co-simulation solutions allow for, which increases the importance of real-number modelling (RNM) methods and UVM support. Lu tells me that the UVM-MS working group now has a standard ready for board approval, which he sees as likely to happen after the holidays.
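To give a sense of what RNM means in practice, here is a minimal SystemVerilog sketch; the module, port names and gain value are all illustrative, not taken from any standard:

  // Real-number model of an analog gain stage: event-driven real values
  // stand in for continuous voltages, so the block simulates natively in
  // a digital simulator without an analog solver.
  module rnm_gain #(parameter real GAIN = 2.0) (
    input  real vin,
    output real vout
  );
    assign vout = GAIN * vin;
  endmodule

As I understand it, the UVM-MS work is about standardizing how UVM testbenches drive and check models like this alongside digital logic.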

There was a complication in achieving this goal insofar as it requires (in some areas) extensions to the SystemVerilog (SV) standard. SV is under the control of IEEE rather than Accellera, and IEEE updates standards only on a 5-year cycle. However, IEEE and Accellera work together closely, and Accellera is busy defining those extensions in a backward-compatible way. This effort is expected to complete fairly soon, at which point it will be donated back to IEEE for consideration in their next update to the SV standard.

This all sounds complicated and still a long way off, but it seems that those Accellera recommendations are more or less guaranteed to be accepted into the next IEEE update. Tentatively (not an official statement), vendors and users might be able to proceed much sooner with more comprehensive UVM-MS development once tools, IPs, etc. are released to the interim standard.

Finally, Accellera is actively looking for new areas where it can contribute in support of the latest technologies. One area Lu mentioned is AI, though it seems discussion at this stage is still very tentative, not yet settled into any concrete ideas.

DVCon International Perspectives

DVCon, under the auspices of Accellera, is already well established in the US, Europe and India. Recently, conferences launched in China, then Japan, and then Taiwan. Each of these offers a unique angle. Europe is big in system-level verification and automotive, given local interest in aerospace and the car industry. India is very strong in verification, as many multinationals with Indian sites have developed teams with strengths in this area. (I can confirm this; I see quite a lot of verification papers coming out of India.)

Japan has a lot of interest in board-level design simulation, whereas Chinese interests cut across all domains. (I can also confirm this. Many research papers I review for the Innovation in Verification blog series come out of China.) DVCon activity in Taiwan is quite new and Accellera has chosen to co-locate it with related conferences like RISC-V events. Good stuff. Wider participation and input can only strengthen standards.

Overall – good progress and I’m happy to see that Accellera is pushing on those frontiers!


Electrical Rule Checking in PCB Tools
by Daniel Payne on 12-10-2024 at 10:00 am

I’ve known about DRC (Design Rule Checking) for IC design, and the same approach can also be applied to PCB design. The continuous evolution of electronics has led to increasingly intricate PCB designs that require Electrical Rule Checking (ERC) to ensure that performance goals are met. This complexity poses several challenges in design verification, often resulting in errors, inefficiencies, and increased costs. This blog post examines these challenges and introduces HyperLynx DRC, an EDA tool from Siemens, to address them.

Modern electronic products demand enhanced functionality and performance, directly impacting the complexity of PCB design and verification. The use of complex components, high-speed interfaces, and advanced materials requires thorough PCB checks to guarantee optimal performance and reliability. This level of complexity often stretches the capabilities of traditional verification methods. 

Several factors contribute to the challenges in PCB design and verification:

  • Error-Prone Processes: The intricate nature of complex PCBs makes the design process susceptible to errors. Oversights and mistakes during layout, component placement, and routing can compromise product functionality and reliability. Undetected errors lead to revisions, rework, and possibly complete redesigns, impacting project timelines and budgets.
  • Infrequent Checks: The labor-intensive nature of PCB checking processes discourages frequent checks throughout the design cycle. Delays in verification lead to accumulated errors and inconsistencies, making fixes challenging and time-consuming.
  • Late-Stage Error Detection: Detecting design errors in later stages of development is inefficient, leading to more modifications, increased development time and costs, and delayed time-to-market. This is particularly critical in industries with rapid technological advancements.
  • Simulation Challenges: Traditional signal and power integrity simulations involve analyzing numerous objects, including nets, planes, and area-fills. Collecting simulation models and running simulations for each object is labor-intensive and time-consuming, often exceeding the benefits gained.

HyperLynx DRC

To face these challenges, Siemens developed HyperLynx DRC, a rule-based checker that identifies potential PCB design errors using geometrical calculations. The key features are:

  • Predefined Rules: The software comes with over 100 predefined rules addressing various aspects of PCB design, including signal integrity, power integrity, electromagnetic interference, electrostatic discharge, analog circuits, creepage, clearance, and IC package-specific checks.
  • Efficient Embedded Engines: HyperLynx DRC utilizes various embedded engines, such as the geometry engine, graph engine, field solver, and creepage engine, for efficiently checking diverse technical challenges.
  • Management of False Violations: The tool provides a feature for managing false violations, allowing users to create object lists, apply rules to specific objects, and eliminate unnecessary checks, significantly reducing checking time.
  • Enhanced Filtering Capability: HyperLynx DRC enables the creation of object lists manually or automatically, offering filtering capabilities to focus on relevant objects.

The extensive capabilities of HyperLynx DRC can lead to long rule-based geometrical run times for large and complex designs. To address this, HyperLynx DRC provides the area-crop function, allowing users to isolate and analyze specific areas of the design. 

The area-crop function streamlines the verification process through:

  • User-Friendly Interface: Users can quickly specify an area by selecting nets or components using a wizard.
  • Automated Cropping: The wizard automatically crops the design with predefined merging from the selected objects and creates a new project for checking.

This function enables users to concentrate on specific design areas, reducing complexity, enhancing accuracy and speeding up run times during verification.

Case Study

MediaTek, a leading semiconductor company, used HyperLynx DRC’s area-crop function on a highly complex board. The board specifications were:

  • Layout file size: Over 300 MB
  • Layers: Over 40
  • Layout size: Over 22,000 mil × 16,000 mil
  • Components: Over 16,000
  • Nets: Over 11,000

The area-crop function was used as follows:

  • Segmentation of the Board: The board was divided into four sections using vertical and horizontal virtual cuts, creating top-left, top-right, bottom-left, and bottom-right areas. Two additional overlap zones were added at the intersecting regions to ensure thoroughness.
  • Accelerated Verification: Checking each section individually significantly reduced the overall run time, particularly for the complex GND signal Long Stub rule.
  • Reduced Complexity: Dividing the board into smaller sections simplified the intricate GND nets, enhancing performance and allowing for efficient error identification and resolution.

(Figure: PCB layout with four areas selected)

The implementation of the area-crop function yielded impressive results:

  • Time Reduction: Total checking time was reduced from 33 hours, 51 minutes, 53 seconds to just 44 minutes, a reduction of nearly 98%.
  • Enhanced Efficiency and Precision: Focusing on segmented areas allowed for more precise verification, ensuring design reliability and integrity without compromising the project timeline.
  • Optimized Resource Allocation: Large time savings and enhanced focus enabled optimized resource allocation, ensuring critical areas received proper scrutiny and facilitated a smoother design refinement process.

(Figure: Run times per area under the Long Stub rule)

Conclusion

HyperLynx DRC’s area-crop function is a powerful tool for PCB design verification. By enabling focused verification, reducing complexity, and significantly accelerating the checking process, HyperLynx DRC ensures project success and meets the challenges of modern PCB designs. This innovative solution ensures advancements in electronic products are characterized by reliability, precision, and efficiency.

Read the complete, 12-page white paper online.


Synopsys Brings Multi-Die Integration Closer with its 3DIO IP Solution and 3DIC Tools
by Mike Gianfagna on 12-10-2024 at 6:00 am

There is ample evidence that technologies such as high-performance computing, next-generation servers, and AI accelerators are fueling unprecedented demands for data processing speed with massive data storage, lower latency, and lower power. Heterogeneous system integration, more commonly called 2.5D and 3D IC design, promises to address these demands. As there is no “free lunch”, these new design approaches create similarly unprecedented demands associated with manufacturability and cost. It turns out the solution to this dilemma requires a combination of advanced design tools and purpose-built IP. One company stands apart with deep technology in both areas. Let’s explore how Synopsys brings multi-die integration closer with its 3DIO IP Solution and 3DIC tools.

Framing the Problem

There are two fundamental challenges to be met in order to bring heterogeneous system integration closer to reality – packaging and interconnect. Let’s examine the key requirements of each.

The need to process massive quantities of data is a driver for advanced packaging. There are many approaches here. 2.5D and 3D packaging have gained popularity as prominent solutions. In the 2.5D approach, two or more chips are side by side with an interposer connecting them. The interposer acts as a high-speed communication interface, creating greater flexibility to combine functions in one package.

For 3D ICs, chips are connected with vertical stacking. This improves performance and functionality, allowing the integration of chiplets across multiple layers. A key trend is to shrink the bump pitch between the chiplets. This shortens interconnect distances and reduces the related parasitics.

All of these new design requirements and advanced packaging approaches have given rise to a significant change in interconnect strategies, from traditional copper uBUMPs to the most advanced uBUMPs at 40um pitch, scaling even further down to 10um.

For 2.5D design, the connection between chips is made through redistribution layers on the interposer.  The distance between chips is usually around 100um. For 3D, the use of vertical stacking allows for direct connection between two chips, reducing the distance to less than 40um. The result is a much smaller substrate.

With this approach, IO no longer needs to be placed at the edge of the chip. Also, by using hybrid bond technology the vertical connection between chips is even tighter. Hybrid bonding connects dies in packages using tiny copper-to-copper connections (<10um).  

Synopsys has released an informative technical bulletin on all these trends. A link is coming. The figure below is taken from that document and shows these significant scaling trends.

Addressing the Problem

Taming these design challenges requires a combination of advanced EDA tools and specialty IP. Together, they form a winning design flow. Synopsys is well-known for its 2.5/3D design tools. Its 3DIC Compiler is a key enabler for multi-die integration. It turns out the design methodology required spans many disciplines. More on that in a moment. First, let’s examine how Synopsys brings multi-die integration closer with its 3DIO IP Solution.

This IP is specially tuned for multi-die heterogeneous integration, enabling the optimal balance of power, performance and area to address the packaging demands of 3D stacking. It turns out the 3DIO IP enables faster timing closure as well.

To better understand how it works, here are the key components of the solution:

  • Synopsys 3DIO includes a synthesis-friendly Tx/Rx cell compatible with Synopsys standard cell libraries and a configurable charged-device model for optimal ESD protection. As the number of IO channels increases, the optimized Synopsys 3DIO solution leverages the automatic place-and-route environment to place and route the IOs directly on the BUMP. The solution supports both 2.5D and 3D packaging using uBUMP and hybrid BUMP. The Synopsys 3DIO cell supports a high data rate and offers the lowest-power solution, with an optimal area that fits within the hybrid BUMP area.
  • Synopsys Source Synchronous 3DIO (SS3DIO) extends the synthesizable 3DIO cell solution with a clock forwarding functionality to aid in lower bit error rate and ease timing closure between dies. The SS3DIO offers scalability to create custom-sized macros with optimal PPA and ESD. The TX, RX, and clock circuits support matched data and clock path, with data launched at the transmitting clock edge and captured at the corresponding receiving clock edge.
  • Synopsys Source Synchronous 3DIO PHY is a 64-bit hardened PHY module with inbuilt redundancy, optimized for the highest performance. The 3DIO PHY with CLK forwarding reduces bit error rate and eases implementation along with optimal placement of POWER/CLK/GND BUMP.

The figure below, also taken from the Synopsys technical bulletin, provides an overview of how the Synopsys 3DIO IP Solution helps with a variety of design challenges.

With new packaging technologies and increased density of interconnects, there is a significant rise in the IO channels for a given die area. The corresponding decrease in IO channel length increases performance but gives rise to the need for a more streamlined interface. The Synopsys 3DIO IP Solution provides a way to implement tunable, integrated multi-die design structures.

To Learn More

Addressing the challenges of heterogeneous system integration requires a combination of advanced design tools and IP optimized for this new design style. Synopsys provides strong offerings in both areas. As mentioned, a cornerstone on the tools side is the Synopsys 3DIC Compiler. You can learn more about Synopsys 3DIC Compiler here. In the area of overall design flow, there is an excellent webinar that Synopsys recently presented with Ansys that delves into all aspects of multi-die design. You can catch the replay of that webinar here.

You can access the technical bulletin that provides more detail on the Synopsys 3DIO Solution here. And you can explore more about this IP, including access to the Synopsys 3DIO IP Solution datasheet, on the Synopsys website here. And that’s how Synopsys brings multi-die integration closer with its 3DIO IP Solution and 3DIC tools.


What is Wrong with Intel?
by Daniel Nenni on 12-09-2024 at 10:00 am

One of the most popular topics on the SemiWiki forum is Intel, which I understand. Many of us grew up with Intel, some of us have worked there, and I can say that the vast majority of us want Intel to succeed. The latest Intel PR debacle is the abrupt departure of CEO Pat Gelsinger. To me this confirms the answer to the question, “What is wrong with Intel?”. But first let’s look at the big picture.

Hopefully we can all agree that AI will change the world over the next decade. It has already started: AI has made its way down from the cloud into our cars, homes, laptops and phones. AI is also being weaponized and is a critical technology powering conflicts around the world. Clearly, the leaders of the AI race will also be the leaders of the new AI-infused world.

We should also be able to agree on the importance of semiconductor technology and of controlling the semiconductor supply chain. The pandemic-fueled semiconductor shortages should still be fresh in our minds, and if you think it could not happen again, you are wrong.

When speaking about semiconductors you can separate them into two categories: logic and memory. Currently, the majority of leading-edge logic chips come from Taiwan (TSMC), with Intel and Samsung a distant second and third. The majority of memory chips come from South Korea (Samsung and SK Hynix), with U.S.-based Micron a distant third, plus relatively new memory chip makers in China. To be clear, without memory, logic is useless, and without logic, memory is useless.

My personal mantra has always been to plan for the worst and hope for the best so you will not be disappointed. Best case is that Taiwan and South Korea continue as they have for the last 50+ years. Worst case is that they won’t and the semiconductor supply chain is fractured and life as we know it is over. We may not go back to prehistoric days but to the younger generations it will seem like it.

There are two companies that are critical to semiconductor manufacturing in the United States: Intel (logic) and Micron (memory). Both are semiconductor legends, and both are critical to the survival of semiconductor manufacturing in the United States.

We can discuss Micron another time but a recent podcast with Intel’s Dr. Tahir Ghani (Mr. Transistor) reminded me of how important Intel is to the semiconductor industry. This week I am at IEDM, the premier semiconductor conference that showcases semiconductor innovation, and Intel is again front and center. This is a much longer technology discussion so I will simply say that Intel is critical to the semiconductor industry and the United States of America. If you think otherwise post it in the SemiWiki forum and thousands of working semiconductor professionals will explain it to you in painful detail.

This brings us back to the question: what is wrong with Intel? In my opinion, Intel has had the worst board of directors in the history of the semiconductor industry. The hiring and firing of the three previous CEOs is a clear example. Seriously, we are talking about 20 years of bad decisions that have destroyed a semiconductor legend.

I posted a blog about The Legend of Intel CEOs in 2014, updated it after Pat Gelsinger was hired, and will have to update it yet again. To me, the bad board decisions started when they hired Paul Otellini (finance/sales/marketing) and then made an even worse pick with Brian Krzanich (manufacturing). The firing of Krzanich was even worse. How could the board not have properly vetted a man whose entire career was at Intel? The stated reason for the firing was absolute nonsense. Krzanich was the worst Intel CEO of all time, and that is why he was fired. I would liken it to Pat Gelsinger’s “refirement” announcement. Why are boards of directors allowed to reimagine CEO firings? They are heavily compensated employees of a publicly traded company. Intel pays these people millions of dollars in salary and stock options to safeguard the company. Where are the activist investors now?!?!

I also questioned the hiring of Robert Swan (finance) as Intel CEO. As it turns out, Swan signed the megadeal with TSMC that saved the company from the Intel 10nm debacle, and he was later fired for it. I do believe that if Swan had stayed on as CEO, Intel would be fabless today, which is a very bad idea for the reasons stated above.

In regard to Pat Gelsinger, I was a big fan at the beginning, but I told my contacts at Intel that the strategy should be to “speak softly and carry a big stick”. Intel’s culture has been based on being a virtual monopoly for so many years that it really got the best of them. Making overly optimistic statements is a very risky proposition. At some point those statements will come back to haunt you unless you have the revenue to back them up. Intel did not, so Pat was out, just my opinion.

Let’s be clear: Intel is an IDM foundry, like Samsung. TSMC is a pure-play foundry with hundreds of customers and partners collaborating on R&D and business strategy. No one company is going to compete with that. If you compare Intel Foundry to Samsung Foundry you get a very favorable result. The strategy of challenging TSMC head-to-head has been tried before (Samsung and GlobalFoundries), and billions of dollars were wasted. How did a seasoned board of directors allow this to happen?

As for the rumors of Intel being acquired, in my opinion Broadcom is the only company that qualifies. I’m confident Hock Tan could turn Intel around. I do not know how the finances would work but Hock’s management style would get Intel back into the green without a doubt.

Selling off the manufacturing part of Intel is ridiculous. Do you really think Intel Design can compete with Nvidia or even AMD without intimate ties to manufacturing? I was really excited when Intel signed the agreement with TSMC because it was a head-to-head design shootout, with AMD, Nvidia, and Intel on the same process technology for the very first time. You tell me how that turned out. Are the new Intel products disruptive? The entire leading-edge semiconductor industry is using TSMC N3. Will Intel really be relevant without manufacturing?

The quick fix for Intel is to be acquired by Broadcom. Bringing back Pat 2.0 and replacing the board of directors is another option. A third option is for the U.S. Government to step in and make semiconductor manufacturing a priority. Maybe Elon Musk can help Intel sort things out (kidding/not kidding).

Bottom line: Some very difficult decisions have to be made by some very qualified people. Take a look at the current Intel Board of Directors and convince me that they are the right ones to do it. You have an open invitation to be a guest on our podcast or post a written response to this blog.

I started SemiWiki 14 years ago to give semiconductor professionals a voice, a platform to participate in conversations for the greater good of the semiconductor industry. Let’s use it to help Intel become an industry leader again.


Enhancing System Reliability with Digital Twins and Silicon Lifecycle Management (SLM)
by Kalar Rajendiran on 12-09-2024 at 6:00 am

As industries become more reliant on advanced technologies, the importance of ensuring the reliability and longevity of critical systems grows. Failures in components, whether in autonomous vehicles, high performance computing (HPC), healthcare devices, or industrial automation, can have far-reaching consequences. Predicting and preventing failures is essential, and technologies like Digital Twins and Silicon Lifecycle Management (SLM) are key to achieving this. These tools provide the ability to monitor, analyze, and predict failures, thereby improving the dependability and performance of systems.

“The reliability, availability, and serviceability (RAS) of complex systems such as data center infrastructure has never been more complex or critical,” said Jyotika Athavale, director of Engineering Architecture at Synopsys. “By integrating silicon health with digital twin simulations, we unlock powerful new capabilities for predictive modeling. This enables technology leaders to optimize system design and performance in new, impactful ways.”

Athavale addressed this topic during a recent talk at the Supercomputing Conference 2024. She leads quality, reliability and safety research, pathfinding, standards and architectures for SLM solutions across RAS-sensitive application domains.

Why Digital Twins Are Good for Prognostics

A Digital Twin is a virtual replica of a physical asset, created by combining real-time sensor data with simulation models. Digital twins enable continuous monitoring of system health and provide valuable insights for prognostics, which is the process of predicting future failures. By simulating different scenarios, digital twins can predict Remaining Useful Life (RUL), helping operators plan maintenance or replacements before a failure occurs. RUL refers to the time a device or component is expected to function within its specifications before failure. This proactive approach reduces downtime and optimizes system resources.

Types of Failures in Modern Systems

Failures in modern systems are categorized into permanent, transient and intermittent faults. Permanent faults, such as Time-Dependent Dielectric Breakdown (TDDB), Negative Bias Temperature Instability (NBTI), and Hot Carrier Injection (HCI), develop over time and lead to errors that result in failure. In contrast, transient faults are temporary disruptions caused by external factors like radiation, which do not result in lasting damage.

In sub-20nm process technologies, degrading defects continue to evolve into the useful life phase of the bathtub curve, leading to issues like Silent Data Corruption (SDC), which can go unnoticed until critical failure occurs.

Why Failures Are Increasing

Despite technological advancements, failures are rising due to several factors. As devices shrink in size and increase in complexity, they become more vulnerable to failure. Smaller transistors, particularly below 20nm, are more susceptible to intrinsic wearout. Moreover, the demand for higher performance leads to greater stress on semiconductors. With interconnected systems in critical applications, even a single failure can have serious consequences, making predictive maintenance even more essential.

“To keep pace with these challenges, it’s essential to shift from reactive to predictive maintenance strategies,” said Athavale. “By integrating real-time monitoring and predictive insights at the silicon level, we can better manage the complexities of modern systems, helping avoid potential failures and making maintenance more manageable.”

How to Monitor Silicon Health

Monitoring the health of semiconductor devices is crucial for identifying early signs of degradation. With embedded monitors integrated during the design phase, data on key performance metrics—such as voltage, temperature, and timing—can be continuously collected and analyzed. Silicon Lifecycle Management (SLM) systems include PVT monitors to track process, voltage, and temperature variations, path margin monitors to ensure signal paths remain within safe operating margins, and clock delay monitors to detect timing deviations. SLM also includes in-field analytics, which enables real-time monitoring and proactive decision-making throughout the device lifecycle.

Analyzing and Predicting Failures

Once the data is collected, it is analyzed to detect potential failures. Prognostic systems use advanced algorithms to analyze degradation patterns, such as those caused by TDDB, NBTI, and HCI, to predict when a component might fail. Predicting RUL is vital for managing system reliability, as early identification of failure allows for corrective actions like maintenance or replacement before the failure occurs.

RUL Prediction Using Synopsys SLM Data Solution

Synopsys’ SLM solution enables accurate RUL predictions through advanced monitoring and analytics, ensuring predictive maintenance and enhanced device reliability.

Key components of the Synopsys SLM solution include SLM PVT Monitors, which track process, voltage, and temperature variations to assess wear; SLM Path Margin Monitors, which detect timing degradation in critical paths; SLM Clock Delay Monitors, which identify clock-related performance anomalies; and SLM In-Field Analytics, which analyzes real-time data to predict failure trends.

The benefits of RUL prediction with Synopsys SLM include predictive maintenance, optimized reliability vs. performance, lifecycle and end-of-life planning, outlier detection, and catastrophic failure prevention. Corrective actions based on RUL analysis can include early decisions on recalls, implementing lifetime-extending mitigation strategies, and transitioning devices to a safe state to prevent further damage. Synopsys SLM provides actionable insights to minimize downtime, extend device lifespan, and ensure reliable performance throughout the lifecycle of semiconductor devices.

Summary

The combination of digital twins and Silicon Lifecycle Management (SLM) provides a powerful approach to managing the health and reliability of semiconductor devices. By enabling continuous monitoring, accurate failure prediction, and timely corrective actions, these technologies offer organizations tools to improve dependability, optimize performance, and reduce downtime. As electronic systems grow more complex and mission-critical, digital twins and SLM are becoming essential for predictive maintenance, ensuring long-term system reliability, and preventing costly failures.

Also Read:

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)

Synopsys-Ansys 2.5D/3D Multi-Die Design Update: Learning from the Early Adopters


Podcast EP265: The History of Moore’s Law and What Lies Ahead with Intel’s Mr. Transistor
by Daniel Nenni on 12-08-2024 at 6:00 am

Dan is joined by Dr. Tahir Ghani, Intel senior fellow and director of process pathfinding in Intel’s Technology Research Group. Tahir has a 30-year career at Intel working on many innovations, including strained silicon, high-K metal gate devices, FinFETs, RibbonFETs, and backside power delivery (BSPD), among others. He has filed more than 1,000 patents over his career at Intel and was honored as Intel’s 2022 Inventor of the Year. He has the nickname of “Mr. Transistor” since he’s passionate about keeping Moore’s Law alive.

In this very broad discussion, Tahir outlines the innovations over the past 60 years of Moore’s Law and how these advances will pave the way to a trillion transistor device in this decade. Tahir explains how transistor scaling, interconnect advances, chiplet-based design and advanced packaging all work together to keep Moore’s Law scaling alive and continue to deliver exponential increases in innovation.

Tahir will present an invited paper at a special session of the upcoming 70th IEDM called The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead. IEDM will be held from December 7-11, 2024 in San Francisco.  You can learn more about IEDM and register to attend here. His presentation will be Tuesday, December 10 at 2:20 PM. Tahir also reviews several other significant Intel papers that will be presented at IEDM.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP264: How Sigasi is Helping to Advance Semiconductor Design with Dieter Therssen
by Daniel Nenni on 12-06-2024 at 10:00 am

Dan is joined by Dieter Therssen, CEO of Sigasi. Dieter started his career as a hardware design engineer, using IMEC’s visionary tools and design methodologies in the early days of silicon integration. Today, being CEO of Sigasi, a fast-growing, creative technology company, is a perfect fit for Dieter. Having worked in that space for several companies, and being well-rooted in the field of semiconductors, he forever enjoys the magic of a motivated team.

Dan explores the changing landscape of semiconductor design with Dieter. The demands of higher complexity and multi-technology systems are discussed. The impact of AI, and specifically generative AI, is also explored with a view toward how the unique front-end design tools offered by Sigasi can move technology forward.

ASIC/FPGA design and safety/security requirements are also reviewed in this broad discussion. Dieter explains how Sigasi is helping these trends and also discusses the new and unique community version of the Sigasi tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: GP Singh from Ambient Scientific
by Daniel Nenni on 12-06-2024 at 6:00 am

Gajendra Prasad Singh, also known as GP Singh, is a seasoned tech professional with over 26 years of experience in advanced semiconductor chips. With a zeal for solving the most complex technical problems, he embarked on a difficult journey to create programmable AI microprocessors that provide high performance in a cost-effective manner while still consuming low power. To realize this vision, he co-founded Ambient Scientific along with a team of visionaries from California’s Silicon Valley. GP’s extensive technical experience building cutting-edge chips and his record of successful leadership at prestigious global companies contributed to his deep understanding of both the scientific first principles required for such breakthrough innovations and the business acumen needed to ensure practical feasibility. With an innate passion for everything electronics and computers, GP Singh is a fierce advocate of using semiconductors for the betterment of human lives.

Tell us about your company?

Ambient Scientific is a fabless semiconductor company born in Silicon Valley, pioneering ultra-low power AI processors that are fully programmable to enable endless AI applications.

Our breakthrough analog in-memory compute technology, called DigAn®, is making AI computing more powerful and efficient than ever before, without compromising on flexibility and programmability. Compared to traditional AI hardware, our processors deliver thousands of times more AI performance at the same power consumption, or thousands of times less power consumption for the same AI performance.

Our first product GPX10 leverages the DigAn® architecture to bring battery-powered, cloud-free, on-device AI applications to life, something considered nearly impossible before. From always-on voice detection to FaceID to predictive maintenance, GPX10 is enabling endless applications in various industries, all while running on as little as a coin cell battery with no dependence on the cloud or an internet connection.

With a full-stack SDK designed to support industry-standard AI frameworks (TensorFlow, Keras, etc.) and an AI compiler to enable custom neural networks, we enable rapid time to market for your AI applications. Order our DVK today and bring the power of AI away from the cloud, right onto your fingertips.

What problems are you solving?

While the AI application and software landscape has exploded in complexity, hardware has failed to keep up. Current chips used for AI processing (GPUs) were designed with graphics processing, not AI computing, in mind, making them inefficient and extremely expensive. This is clearly visible in the rising compute costs as well as power consumption for all AI, ranging from gigantic LLMs to edge AI for smaller electronic devices. We at Ambient Scientific have solved these problems by inventing not just analog in-memory computing but also a new instruction set architecture designed specifically for AI computing. Our analog matrix multiplication engines deliver 40X AI performance at 70X lower power consumption compared to equivalent GPUs. Built with scalability and flexibility in mind, our architecture enables AI processors all the way from cloud and server level to MCU level for a wide variety of applications across several industries. Ambient Scientific’s mission is to make AI computing powerful, energy efficient and affordable for everyone alike.

What application areas are your strongest?

Our first product, GPX10, is an AI processor targeted at on-device AI applications for the tiniest of battery-powered devices. It moves AI processing from the confines of the cloud directly onto the device, even if it’s running on as little as a coin cell battery. This improves application reliability, latency, data security as well as total cost of ownership. Some of our strongest application areas popular with customers are industrial predictive maintenance at the edge, anomaly detection on MedTech devices and cloud-free voice control for consumer products. While these applications would commonly struggle with latency, reliability or minuscule battery lives due to AI processing, our processor solves all these problems without forcing any compromises, even on affordability.

What keeps your customers up at night?

With the widespread utility of AI, product makers have realized the importance of incorporating AI features into their product roadmap to remain competitive and maintain differentiation. These product makers are now faced with a difficult choice:

  1. Run AI processing in the cloud and sacrifice latency, data privacy and reliability due to complete dependence on a network connection.
  2. Run AI on device and sacrifice accuracy and power efficiency which translates into significantly compromised battery life.

These limitations, which ultimately translate into higher costs or compromised product quality, are a direct consequence of the processors currently available in the market, none of which were designed for AI processing. They force debilitating sacrifices on product makers, which keeps them up at night, stuck between a rock and a hard place.

What does the competitive landscape look like and how do you differentiate?

The AI compute market for small electronic devices includes MCUs, entry-level GPUs and new-age NPUs. While MCUs cannot deliver enough performance for meaningful AI compute, entry-level GPUs consume too much power, occupy too much area and are not affordable enough to fit within the boundaries of commercial viability for battery-powered on-device AI applications. Several new-age NPUs claim to deliver low-power AI solutions, but with a heavy price to pay in lack of programmability. They tend to be fixed-function, with pre-defined neural networks and minimal room for customization. Our ultra-low-power AI chips not only deliver the highest performance per unit of power consumption (>7 TOPS/W), they’re smaller than a fingernail, affordable and, most importantly, completely programmable. Product makers care about programmability so they can differentiate their products from competitors’ by owning the software, such as their proprietary AI algorithms. Programmability also makes their products future-proof, with the ability to push updates over the air as the software and application landscape evolves. Compared to fixed-function or application-specific NPUs, our processors offer a versatile and flexible platform for product makers to differentiate themselves with ultra-low-power AI features well into the future.

What new features/technology are you working on?

Our claim to fame is a breakthrough in analog in-memory computing technology that enables us to leverage a combination of high-speed digital and analog circuits designed specifically for AI computing. By leveraging a cubic in-memory architecture and the analog matrix multiplication circuit, we’ve solved all the bottlenecks of AI computing while minimizing energy consumption to a fraction of contemporary architectures. Not only this, we’ve also created a custom instruction set architecture from the ground up to enable flexibility and scalability in AI computing. This means we can build a wide range of processors, from AI MCUs to high-speed computer vision processors. Similarly, our end-to-end software stack scales with our processors to adapt to the application needs of software developers for a wide variety of applications in several industries.

Also Read:

CEO Interview: Ollie Jones of Sondrel

CEO Interview: Dr. Yunji Corcoran of SMC Diode Solutions

CEO Interview: Rajesh Vashist of SiTime

CEO Interview: Dr. Greg Newbloom of Membrion


SystemReady Certified: Ensuring Effortless Out-of-the-Box Arm Processor Deployments
by Lauro Rizzatti on 12-05-2024 at 10:00 am

When contemplating the Lego-like hardware and software structure of a leading system-on-chip (SoC) design, a mathematically inclined mind might marvel at the tantalizing array of combinatorial possibilities among its hardware and software components. In contrast, the engineering team tasked with its validation may have a more grounded perspective. Figuratively speaking, the team might be more concerned with calculating how much midnight oil will need to be burned to validate such a complex system.

The numerous interactions between hardware components, such as large arrays of various processor types, memory types, interconnect networks, and a wide assortment of standard and custom peripherals and logic, and software components, like bare-metal software, drivers, and OS hardware-dependent layers, demand exhaustive functional verification. This process is computationally intensive, requiring billions of cycles to establish confidence in a bug-free design before manufacturing. The challenge is magnified by the relentless pace of technology, with new hardware and software versions constantly emerging while support for older iterations persists.

The Economies of Design Debug

A well-known axiom in the field of electronic design emphasizes that the cost of fixing a design bug soars an order of magnitude at each successive stage of the verification process. What might cost a mere dollar to correct at the basic block-level verification stage can skyrocket to a million dollars when the issue surfaces at the full SoC level, where hardware and software tightly interact.

The stakes become even higher if a design flaw goes undetected until after silicon fabrication. Post-silicon bug detection not only challenges engineering teams but can also lead to exorbitant costs that may drain a company’s financial resources. For small enterprises, such a scenario could be catastrophic, potentially leading to bankruptcy due to the redesign expenses and missed revenues caused by delayed product launches.

In the fiercely competitive semiconductor industry, the margin for error is razor thin. Therefore, rigorous verification at each stage of the design process is not just a best practice—it’s a critical safeguard against the potentially ruinous consequences of post-silicon bug detection.

On the bright side, the electronic design automation (EDA) industry has been investing heavily in resources and innovation to tackle the challenge of pre-silicon verification. The Shift-Left verification methodology is a testament to the industry’s commitment to addressing this challenge.

Arm: Linchpin Example of the Hardware/Software Integration Challenges

Among processor companies, Arm is a case study because of its vast catalog of IP solutions. Arm offers a wide range of IPs, platforms and solutions, including CPUs, GPUs, memory controllers, interconnects, security, automotive, AI, IoT, and other technologies, each designed to meet the needs of different markets and applications. While the exact number is not publicly known, when adding updates and new releases, it amounts to thousands of different parts.

SoC designers using Arm components face an uphill verification challenge. Once they have selected the IP components, they must integrate them into complex SoC designs, add a software stack to bring the design to life, and ensure compliance, that is, compatibility or interoperability of the software with the hardware.

This process is fraught with uncertainties and risks.

Often, root causes of integration issues can be traced to non-compliant hardware, such as non-standard PCIe ECAM, PCIe ghost devices, or customized components like universal asynchronous receiver-transmitters (UARTs) or GICs. These issues can lead to design malfunctions, and potentially to serious failures. For instance, systems with complex PCIe hierarchies may lack firmware workarounds, custom OS distributions may receive limited security updates, and Windows servers and clients may be incompatible with non-compliant PCI ECAM.

To address these issues, a widely used but increasingly outdated method in the electronics industry is post-silicon testing. While it serves the purpose of debugging hardware flaws after fabrication, it is inherently inefficient. This approach contradicts the well-established principle of exponential cost increase, summarized by the phrase “the sooner, the cheaper.” By delaying the detection of design flaws until after silicon manufacturing, companies incur costly silicon re-spins and face extended timelines.

Fortunately, these issues can be mitigated much earlier in the development cycle through pre-silicon design verification. Pre-silicon verification, which includes simulation, emulation, formal and timing verification, allows engineers to identify and resolve problems before chips are fabricated, significantly reducing both costs and risks.

Arm’s Game-Changing Solution: From ServerReady to SystemReady

To mitigate this challenge, specifically to eliminate or at least reduce design re-spins and accelerate time-to-market, Arm introduced the SystemReady Certification Program in 2022. Building on the success of the ServerReady program, which was launched in 2018 and targeted server applications, SystemReady expands the coverage to include designs like edge devices, IoT applications, automotive systems, and more.

In general, hardware platforms provided by semiconductor partners come with their own software stacks, i.e., firmware, drivers, and operating systems. These are often siloed, creating challenges for OS vendors and independent software vendors (ISVs) who need to run applications across different platforms, as these setups tend to be highly specific and fragmented. SystemReady aims to break down these silos, enabling software portability and interoperability across all Arm-based A-Class devices. When third-party operating systems run on devices that comply with a minimum set of hardware and firmware requirements based on Arm specifications, they boot seamlessly and applications run smoothly.

SystemReady Program Foundation

The foundation of the Arm’s SystemReady program lies in two key specifications. First, the Base System Architecture (BSA), a formal set of compute platform definitions to encompass a variety of systems from the cloud to the IoT edge, ensures that in-house developed or 3rd-party sourced software works seamlessly across a universe of Arm-based hardware. Second, a set of accompanying firmware specifications called the Base Boot Requirements (BBR), complements the BSA definitions. These sets of rules are encapsulated in the BSA Compliance Suite, accessible on GitHub.

The suite is designed to run compliance tests during pre-silicon validation, eliminating the need for executing full operating systems to validate the environment. This early-stage validation prevents costly silicon respins, expedites system-level debugging, and accelerates time-to-market.

Arm’s Thriving SystemReady Partner Ecosystem

To reach a vast and diverse customer base while considerably enhancing the value of the Arm ecosystem, Arm has strategically partnered with a wide array of companies, including leading EDA vendors, IP providers, and silicon suppliers. These collaborations play a critical role in driving the success of Arm's SystemReady program, a certification initiative that ensures seamless compatibility across hardware platforms and software stacks.

Leading EDA Firms Accelerate SystemReady Certification Success

The pre-silicon validation of software stacks on newly designed hardware platforms demands hardware-assisted verification (HAV) platforms, such as emulation and FPGA prototyping. These platforms are crucial for ensuring that new designs function correctly across the range of real-world conditions they will face. Best-in-class emulators and FPGA prototypes support comprehensive verification and validation processes, including hardware debugging, hardware-software co-verification, power and performance analysis, and even post-silicon testing for final checks.

Prominent suppliers of hardware-assisted verification platforms have joined Arm's SystemReady program, enabling their customers developing Arm SoCs and components to validate BSA compliance on HAV platforms using transactors and verification IP (VIP). By participating in this program, EDA companies enable developers to validate software before silicon is even taped out, significantly reducing risks and development costs while accelerating time-to-market. The "PCIe SystemReady Certification Case Study" below is an example of how a collaborative approach to pre-silicon validation can lead to successful certification and market-ready products.

Case Study: PCIe SystemReady Certification

The PCIe protocol is one of the most widely adopted interfaces in the electronics industry, supporting a broad spectrum of applications, including networking, storage, GPU accelerators, and network accelerators. Each of these applications has a distinct workload profile that interacts uniquely with system components, making PCIe a versatile yet complex protocol to integrate into hardware platforms.

Arm's SystemReady certification for Arm architecture implementations that include complex PCIe subsystems is designed to ensure that these diverse applications run seamlessly across various hardware environments. Achieving this certification requires adherence to a stringent set of compliance rules, which involve injecting specific sequences into the PCIe port and monitoring responses at the PCIe protocol layer, ensuring that the system can handle different types of workloads in real-world scenarios.

Synopsys and PCIe SystemReady Compliance

To streamline this process, Synopsys provides a PCIe endpoint model specifically designed to meet Arm's BSA certification standards. As shown in Figure 1, the SystemReady compliance program is a collaborative effort between Arm, Synopsys, and silicon providers. While the silicon partner focuses on developing the boot code, Synopsys contributes the Platform Abstraction Layer (PAL), a crucial software component that ensures smooth execution of Arm's Compliance Suite tests on the SoC.

Figure 1: Block diagram showing how Arm, Synopsys, and silicon providers work together

The PAL acts as an intermediary, enabling the Compliance Suite to communicate effectively with Synopsys' transactors and VIP, thus maximizing test coverage and capturing corner cases that might otherwise be overlooked. This integration ensures thorough testing of PCIe subsystems, giving developers confidence that their designs meet the highest standards of compatibility and performance.
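The article does not publish the PAL's internals, so the following is only a hypothetical sketch of the idea: the compliance tests call a stable platform API, and the emulation-side implementation routes those calls to the VIP-backed endpoint rather than to silicon. All names and signatures below are illustrative assumptions, not the actual Compliance Suite interface.

```c
#include <stdint.h>

/* Hypothetical Platform Abstraction Layer (PAL) shim. */

typedef struct {
    uint32_t segment;       /* PCIe segment (ECAM region) */
    uint8_t  bus, dev, fn;  /* bus/device/function        */
} pal_bdf_t;

/* Placeholder for the transactor call supplied by the
   hardware-assisted verification platform. */
uint32_t vip_cfg_access(uint32_t seg, uint8_t bus, uint8_t dev,
                        uint8_t fn, uint16_t offset);

/* What a compliance test sees: an ordinary config-space read.
   The PAL forwards it to the VIP endpoint model. */
uint32_t pal_pcie_cfg_read(pal_bdf_t bdf, uint16_t offset)
{
    return vip_cfg_access(bdf.segment, bdf.bus, bdf.dev,
                          bdf.fn, offset);
}
```

The value of this indirection is that the very same tests can later run against first silicon by swapping in a PAL that performs real ECAM accesses.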

Performance Verification and PCIe Protocol Evolution

In addition to compliance testing, performance verification is a critical aspect of pre-silicon design validation for PCIe interfaces. Upgrading to a newer PCIe protocol generation, such as moving from PCIe Gen 5 to PCIe Gen 6, involves significant investment, so it is vital to verify that the system is fully equipped to exploit the additional bandwidth and performance enhancements the newer protocol offers. Performance validation helps determine whether an SoC under development can manage various workloads, and it uncovers any potential bottlenecks that might prevent the system from realizing the full benefits of the upgrade.
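As a rough, back-of-the-envelope illustration of what is at stake, the sketch below compares per-direction bandwidth for a x16 link across the two generations. The encoding and flit-overhead figures are approximations and ignore transaction-layer framing, so real workloads see less.

```c
#include <stdio.h>

/* Approximate per-direction bandwidth of a x16 PCIe link.
   Overheads are rough: Gen 5 uses 128b/130b line encoding; Gen 6
   packs data into 256-byte flits of which roughly 242 bytes carry
   TLP/DLLP payload. Transaction-layer framing is ignored. */
int main(void) {
    const int lanes = 16;
    double gen5 = 32.0 * lanes * (128.0 / 130.0) / 8.0;  /* GB/s, ~63  */
    double gen6 = 64.0 * lanes * (242.0 / 256.0) / 8.0;  /* GB/s, ~121 */
    printf("PCIe Gen 5 x16: ~%.0f GB/s per direction\n", gen5);
    printf("PCIe Gen 6 x16: ~%.0f GB/s per direction\n", gen6);
    return 0;
}
```

Doubling the raw link rate only pays off if the SoC fabric, DMA engines, and memory subsystem can actually sink that traffic; exposing any such gap before tape-out is precisely what this performance validation step is for.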

Synopsys' support for integrating the Compliance Suite adds a further layer of performance validation, allowing users to run comprehensive performance scenarios focused on the PCIe subsystem. This ensures that the PCIe subsystem not only complies with Arm architectural requirements but also achieves optimal performance across a range of SoC applications.

Conclusion

By ensuring that software stacks are portable and interoperable across a diverse range of platforms—from cloud servers to edge devices and IoT applications—Arm’s SystemReady program plays a pivotal role in minimizing design risks. This standardization significantly reduces design costs and accelerates time-to-market, enabling companies to deliver products that function seamlessly out-of-the-box.

SystemReady not only enhances design efficiency but also opens new avenues for Total Addressable Market (TAM) expansion. By ensuring compatibility and reducing development complexity, the program allows Arm’s partners to target a broader range of industries and applications, providing them with a distinct competitive advantage.

These efforts underscore Arm’s commitment to empowering its ecosystem and driving innovation across the industry.

Also Read:

The Immensity of Software Development and the Challenges of Debugging (Part 1 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 2 of 4)

The Immensity of Software Development and the Challenges of Debugging (Part 3 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)