
Topics for Innovation in Verification

by Bernard Murphy on 12-21-2021 at 6:00 am


Paul, Raúl and I are having fun with our Innovation in Verification series, and you seem to be too, judging by the hit rates we’re seeing. We track these carefully to judge what you find most interesting and what falls more under the category of “Meh”. Paul and others also get informal feedback in client meetings, but it would be great to get active feedback from you, the readers, on which topics would most interest you. We’d like to tune our picks to your preferences.

For example, we’re planning an upcoming review of a paper on dynamic coherency testing because we hear strong support from multiple directions that verification teams want more input here. In that spirit, I have a few questions for you, and I’m looking forward to your feedback. Quick comments or carefully considered, voluminous responses are equally welcome. We’ll use your feedback as an input to future topics for Innovation in Verification.

Application papers versus academic papers

We have tended to pick academic papers since these are, at least in principle, most likely to aim at breakthroughs. Application papers are no less worthy, but they tend to aim at very targeted in-house optimizations: apps that simplify or improve a specific verification objective. If they are tied to specific vendor tools, that may also limit broad interest.

Topic areas

We’ve looked at most areas in verification. At the block level there’s always opportunity to improve coverage, also how quickly we can get to coverage. System-level verification is wide open. Lots of opportunity to debate subsystem testing, coverage, how best to define tests at the system level, the relative merits of synthetic versus real-life tests. Then there are the non-functional KPIs: performance, power, security, safety, especially as architectures for managing security and safety continue to evolve.

Post-silicon debug is clearly topical, reflecting limitations in how well (or not so well) we are able to limit escapes in pre-silicon verification. Optimizing the total verification flow, beyond individual run performance, is also picking up, in part through reducing total regression times with learning-based optimization. Even more broadly, many readers are experimenting with Agile methods, integrating with design processes for continuous integration and deployment (CI/CD) flows.

We could also cover more in some areas we have neglected: mixed signal verification, ML hardware verification and virtual modeling are examples.

Vertical extensions

Vertical validation is becoming increasingly important. In automotive, aerospace, the IoT, HPC, medical and many other domains, system objectives are moving much closer to silicon. As a result, completing a test plan needs to comprehend verification objectives and also system validation objectives. One indication is the growing importance of requirements traceability, from high-level design down into the software and silicon. While looking for papers on traditional verification topics, I’ve also come across related papers on system-level validation for robotics and other autonomous applications, suggesting a trend towards these cross-domain validation problems.

This applies particularly, for example, to sensing and sensor fusion. The front-end here is obviously AMS, though there can be significant digital content to control calibration. Fusion is important, especially in safety-critical systems, and requires close interaction between hardware and software to ensure real-time reaction to changes.

There are lots of opportunities to explore existing domains more deeply and new domains to add. Please let me know what you think, either as a comment or by emailing me directly (info@findthestory.net).

Also Read

Learning-Based Power Modeling. Innovation in Verification

Battery Sipping HiFi DSP Offers Always-On Sensor Fusion

Memory Consistency Checks at RTL. Innovation in Verification


DAC 2021 Wrap-up – S2C turns more than a few heads

by Ron Green on 12-20-2021 at 10:00 am

SemiWiki Founder Daniel Nenni and S2C Cofounder Mon-Ren Chen

Now that the 58th Design Automation Conference held this year in San Francisco has concluded, we take a minute to look back at the results and ascertain what it meant for our company.

Unfortunately, many popular tradeshows held in the time of Covid have suffered a drop in attendance, and DAC was no exception. Despite this, however, S2C is pleased to report that the quality of visitors to our booth was quite high. In contrast to several other vendors, we chose to exhibit and demonstrate our latest hardware and software offerings on the show floor, giving customers the chance to examine our products live and close-up.

High-performance prototyping stands at the crossroads of two powerful trends: the increasing size and complexity of SoCs, coupled with the need to validate systems at-speed on real hardware. S2C is ideally positioned to capitalize on these trends. DAC gave us the opportunity to demonstrate how we can satisfy designers’ needs, and helped us by generating good interest, good questions, and good leads.

On the show floor we displayed a number of our latest prototyping products, including the Prodigy Logic System 10M based on the industry’s largest FPGA, Intel’s Stratix 10 GX 10M. Also on display were our Xilinx-based systems, the Prodigy Logic S7-19P, and S7-9P, both getting their fair share of attention.

But without question, the highlight of our booth was our new Prodigy Logic Module LX2. Built around Xilinx’s largest Virtex Ultrascale+ device, the LX2 houses eight VU19P FPGAs, producing a machine of unrivaled speed and capacity. Furthermore, the LX2 architecture provides for interconnecting up to 8 LX2 units, offering the breathtaking capacity of 64 FPGAs – a true heavy-lifting machine. Several customers commented how impressed they were with the system’s specs and capabilities. In the world of high-performance prototyping, the LX2 looks like the one to beat.

But what good is a high-performance prototype if you can’t perform debug? This was a question on the minds of many. We addressed that issue by showing the MDM Pro, our multi-FPGA debug module that is compatible with all our prototyping platforms. The MDM Pro captures the event data generated by long-running tests, allowing the on-board FPGA memory to be preserved for your design needs. When we pointed out that the MDM Pro module comes built into both the 10MQ and S7-19PQ systems, there were several nods of approval.

One skeptic however, remained unconvinced. “Nice hardware,” he was heard to say. “But you guys got any software to go with this stuff?”

Absolutely. If 18 years in the business has taught us anything, it’s that productivity software is critical to the prototyping effort. We were able to demonstrate our premier software offering, PlayerPro, which comprises several modules: Compile, for partitioning, downloading, and configuring a prototype; Runtime, a module for dynamic control of your prototype; and Debug, to configure and work with the MDM Pro hardware.

Also demonstrated was ProtoBridge: an application that supports 4GB/s data transfers via a C API to an AXI4 bus driver. This tool enables high bandwidth data transfers – such as video – between a PC and an FPGA.

To round out our offerings, we displayed a portion of our Prototype Ready IP Library: a rich collection of plug-and-play daughter cards that include memory and interfaces to speed your prototype development. During the course of the show, we responded to inquiries from customers working in fields as diverse as Storage, AI, Networking, and Automotive. There was a fair amount of interest from universities as well.

More than one customer asked about system availability. We were pleased to give the answer everyone wanted to hear: these systems are ready and available now. Your Christmas presents may be stuck in the back of Santa’s Workshop, but not S2C! Our products are in stock and ready to ship with short lead times and fast deliveries!

Overall, DAC was a successful show for us, helping to give our products visibility in the market, and setting the stage for next year. DAC is a unique and useful event, and we’ll definitely be back – at DAC!

About S2C

S2C is a global leader in FPGA prototyping solutions for today’s innovative SoC/ASIC designs. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 500 customers and more than 3,000 systems installed, our highly qualified engineering team and customer-centric sales team understand our users’ SoC development needs. S2C has offices and sales representatives in the US, Europe, Israel, China, Korea, Japan, and Taiwan.

For more information, please visit www.s2ceda.com

Also Read:

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions

S2C EDA Delivers on Plan to Scale-Up FPGA Prototyping Platforms to Billions of Gates

S2C’s FPGA Prototyping Solutions


Bringing PCIe Gen 6 Devices to Market

by Daniel Nenni on 12-20-2021 at 6:00 am


PCIe is a prevalent and popular interface standard used in just about every digital electronic system. It is used widely in SOCs and in devices that connect to them. Since it was first released in 2003, it has evolved to keep up with rapidly accelerating needs for high speed data transfers. Each version has doubled in throughput, with updates coming every few years – except for the notable gap between versions 3.0 and 4.0. PCIe Gen 6 is expected to have its final release in 2021.

PCIe Gen 6 supports 126 GB/s in each direction when using 16 lanes. The individual lane speed will be 7.87 GB/s. Many changes were made in the specification to achieve these data rates. Most significant of these are the change to PAM-4 (pulse amplitude modulation with four levels) signaling and the addition of forward error correction (FEC). Numerous other changes were made to the protocol as well. As is always the case, PCIe Gen 6 interfaces will be backward compatible with earlier versions to ensure interoperability. All of this is good news for system designers in need of higher bandwidth and flexibility.
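As a rough sanity check on these numbers, here is a back-of-the-envelope calculation. It is a sketch only: it computes the raw line rate, before flit framing and error-correction overhead, which is presumably why the quoted per-lane figure sits slightly below 8 GB/s.

```python
# Back-of-the-envelope check of the PCIe Gen 6 numbers above (illustrative
# only). Gen 6 runs at 64 GT/s per lane; each transfer carries one bit, and
# PAM-4 achieves this by sending 2 bits per symbol at half the symbol rate.
transfers_per_s = 64e9                      # 64 GT/s per lane

raw_lane_GBps = transfers_per_s / 8 / 1e9   # raw GB/s per lane
raw_x16_GBps = raw_lane_GBps * 16           # raw GB/s per direction, x16 link

print(raw_lane_GBps, raw_x16_GBps)          # 8.0 128.0
```

The gap between the raw 8 GB/s per lane and the quoted 7.87 GB/s reflects protocol overhead not modeled in this sketch.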

However, these changes mean that designing and verifying complete and correct functionality has become even more difficult. Many system designers will choose to use IP blocks to help implement PCIe Gen 6 in their designs. Whether the interface controller and PHY are developed in house or outsourced, complete verification is a necessity.

Developing a test suite takes a level of effort on par with or greater than developing the PCIe IP itself. Fortunately, Truechip, a developer of verification IP (VIP), offers a complete test suite and verification environment for PCIe Gen 6. Their VIP is fully compliant with the latest PCIe Gen 6 specifications. It is built, using years of experience, to be lightweight, with an easy plug-and-play interface to ensure rapid deployment.

Their PCIe testbench includes agents for the Root Complex and the Device Endpoint. They each come with bus functional models for the TL, DL and PHY layers. In addition, there is a PCIe Bus Monitor which performs many useful operations. It supports assertions, coverage, as well as checkers for the TL, DL and PHY. All of this is connected to a scoreboard to help monitor test results.
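The monitor-plus-scoreboard pattern described above is standard verification methodology. The following is a generic Python sketch of the idea, not Truechip's actual API: monitors report each transaction they observe on a layer, and the scoreboard checks observed traffic against expectations.

```python
# Generic sketch of the monitor/scoreboard pattern (hypothetical, not
# Truechip's API): monitors push observed transactions to the scoreboard,
# which matches them against expected traffic and flags anything unexpected.
class Scoreboard:
    def __init__(self):
        self.expected = []   # transactions the test expects to see
        self.errors = []     # transactions seen but never expected

    def expect(self, txn):
        self.expected.append(txn)

    def observe(self, txn):
        # Called by a bus monitor for every transaction it sees.
        if txn in self.expected:
            self.expected.remove(txn)
        else:
            self.errors.append(txn)

    def passed(self):
        # Pass when everything expected was seen and nothing extra appeared.
        return not self.expected and not self.errors

sb = Scoreboard()
sb.expect(("TL", "MemWr", 0x1000))   # hypothetical TL-layer write
sb.observe(("TL", "MemWr", 0x1000))
print(sb.passed())                   # True
```

A real VIP scoreboard also handles ordering rules and per-layer checkers, but the match-and-flag core is the same.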

The test bench is backward compatible with all of the relevant earlier specifications. It supports precoding for 32 GT/s and 64 GT/s, PAM-4 signaling, FLIT and non-FLIT modes and the new PIPE 6.0 specification. It can be configured to support from x1 to x16 link widths. All low power management states, including the new L0p state, are available. The list of features in the documentation and data sheet is comprehensive and covers every feature in the specification.

To ensure comprehensive validation, the test environment and test suite provide a wide range of tests. Users can run basic and directed protocol tests. There are also random tests and error scenario tests. Truechip includes assertions and cover point tests. Lastly, there are compliance tests to ensure the finished product will work smoothly with other PCIe Gen 6 devices. A full set of documentation walks through the integration process and can be used as a reference guide during use.

The time frame for bringing PCIe Gen 6 devices to market is fast approaching. Truechip has already made customer deliveries of this VIP product. Having ready-to-go VIP can make a big positive impact on the development and testing schedules for products that rely on PCIe Gen 6. With PCIe playing such a large role in SOCs and device operation, it is crucial to support the latest standard and be able to offer the highest interoperability, quality and reliability. Truechip offers much more information about their PCIe Gen 6 VIP on their website. If you are developing products that rely on PCIe Gen 6, it might be worth a look.

Also read:

PCIe Gen 6 Verification IP Speeds Up Chip Development

USB4 Makes Interfacing Easy, But is Hard to Implement

TrueChip CXL Verification IP

 


Pattern Shifts Induced by Dipole-Illuminated EUV Masks

by Fred Chen on 12-19-2021 at 10:00 am


As EUV lithography is being targeted towards pitches of 30 nm or less, fundamental differences from conventional DUV lithography become more and more obvious. A big difference is in the mask use. Unlike other photolithography masks, EUV masks are absorber patterns on a reflective multilayer rather than a transparent substrate. Most articles on EUV lithography do not go into the details that SPIE papers do [1,2]. Figure 1 shows the fundamentally different aspects of EUV masks.

Figure 1. An EUV mask differs from an ideal mask in that the absorbers partly transmit EUV light into the multilayer substrate, which then reflects the light back through the absorbers for a second pass.

EUV masks are essentially like attenuated phase shift masks, where the phase shift is very different from the ideal 180 degrees. In fact, the phase shift depends on the illumination angle as well as the absorber thickness, and a phase shift comes from propagating through the multilayer as well. Since all illumination is from one side, shadowing is a natural consequence as well [1].

For the tighter pitches, dipole illumination is used. For the case of EUV mask illumination within the plane of incidence, this means one illumination angle will be larger than the other. This results in the image from one angle being dimmer and shifted in phase, i.e., position, relative to the other (Figure 2). For this image calculation, the absorber was assumed to be 60 nm thick, with an optical constant of 0.94+0.04i, and the multilayer reflectance at 13.5 nm wavelength was obtained from the CXRO database [3].
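To see roughly where the attenuated phase-shift behavior comes from, here is a simplified estimate using the constants quoted above. It is a sketch only: normal incidence, a simple double pass through the absorber, and no multilayer or shadowing effects, all of which the actual image calculation includes.

```python
import math

# Simplified double-pass estimate of the absorber's effect (normal incidence,
# no multilayer contribution): light crosses the 60 nm absorber twice after
# reflecting off the multilayer beneath it.
wavelength = 13.5              # nm, EUV
thickness = 60.0               # nm, absorber thickness quoted above
n_real, n_imag = 0.94, 0.04    # optical constant quoted above

path = 2 * thickness           # double pass through the absorber
k0 = 2 * math.pi / wavelength  # vacuum wavenumber

amplitude = math.exp(-k0 * path * n_imag)            # transmitted amplitude fraction
phase_deg = math.degrees(k0 * path * (1 - n_real))   # phase shift vs. unattenuated light

print(round(amplitude, 3), round(phase_deg, 1))      # 0.107 192.0
```

Roughly 11% of the amplitude (about 1% of the intensity) leaks through the absorber, with a phase shift near 192 degrees rather than the ideal 180, which is why the mask behaves like an imperfect attenuated phase-shift mask.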

Figure 2. A dipole illumination tuned for 30 nm pitch would produce a symmetric image for an ideal mask, but not so for an EUV mask.

It’s apparent that the image is displaced by an amount that depends on the illumination angle. Figure 3 shows that dipoles spaced apart with different distances from the pupil center produce different shifts. The closer the two poles are to the center, the less asymmetry there is, as the illumination angles differ by less.

Figure 3. Different dipole illumination positions produce different EUV image shifts. The closer to the center, the less disparity between the images of the two pole angles, and therefore the less asymmetric the image.

The pattern shifts are more severe for tighter pitches. Therefore, it should not be a surprise to expect growing consideration, for example, of different absorbers in “next-generation” EUV masks.

References

[1] S. Sherwin et al., “Advanced multilayer design to mitigate EUV shadowing,” Proc. SPIE 10957, 1095715 (2019).

[2] E. van Setten et al., “Multilayer optimization for high-NA EUV mask3D suppression,” Proc. SPIE 11517, 115170Y (2020).

[3] https://henke.lbl.gov/optical_constants/multi2.html

Related Lithography Posts


“Too Big To Fail Two” – Could chip failure take down tech & entire economy?

by Robert Maire on 12-19-2021 at 6:00 am


-Chips enable tech sector which underpins entire economy
-Is the US chip sector “Too Big To Fail”?
-If US chip industry fails, does tech & everything else follow?
-How chip/Taiwan crisis compares to 2008 financial meltdown

It was the best of times, it was the worst of times

We find it an incredible juxtaposition that we are experiencing the greatest strength and growth that the semiconductor industry has ever seen yet the threat to the largest participants and the world order of the semiconductor industry is nothing less than existential.

Semiconductor demand and strength is off the charts yet Taiwan/TSMC is literally under the gun and Intel needs a string of “Hail Mary” plays to get back in the technology race which defines success in the industry.

Things could look incredibly different in a short period of time. We are on the precipice of potential extreme change.

We may have seen a similar movie before in the overheated financial sector that led up to the 2008 financial crisis, which could have had a cataclysmic ending were it not for some strong, last-ditch intervention. This is not to suggest that the subprime mortgage industry and current chip demand are similar; one was almost fraudulent while the other is real demand. The only parallel we draw is that strong intervention may be needed to avert a potentially much larger problem. The risks the semiconductor industry faces are both self-inflicted and external.

Intervention may also take other forms than just pure financial assistance as these risks are varied.

Is the US semiconductor industry Too Big To Fail?

What could happen if Intel fails to get back into the technology race? What if TSMC remains the only leading edge foundry for fabless US chip companies such as Nvidia, Qualcomm and AMD? Can Micron keep up in the memory industry in the face of a torrent of spending in Asia?

Obviously hundreds of billions of dollars of semiconductor revenue are at risk for US based companies but perhaps much more importantly trillions of dollars of goods that rely on the semiconductor industry for the very heart of their products. From the auto industry to defense to communications, the cloud, mobile phones and well beyond.

Cutting out the heart of the stock market and years of a rally

Let’s just think about what the stock market would have looked like, and what it would look like in the future, without the semiconductor industry as it is today.

In case you have been living under a rock with your money stuffed in a mattress, the main driver of the stock market has been tech stocks. Yes, other sectors have done well but tech, and especially semiconductors, has been at the heart of the market’s strength, creating much of the momentum.
The stock market, and with it many investors’ net worth, would look a lot different.

Semiconductors are inside and critical to so many industries that it reminds us of how AIG was ingrained in the very fabric of the financial industry, in many products and companies, in ways people did not understand until the risk of its failure exposed just how deep that reach was. It took a $180B government bailout to rescue AIG from taking the whole financial sector down with it.

The chip shortage that has stopped cars from shipping is just the beginning and tip of the iceberg that goes much further and deeper into the economy with much greater risk.

So the question is: would the failure of the US semiconductor industry do less or more damage to the US economy than if AIG had failed? Maybe it’s also worth a $180B investment, and not just a small $52B Chips for America, which is a relative drop in the bucket.

Maybe it’s not just Too Big to Fail but perhaps Too Critical to Fail as well…..

The risks are both internal and external

Given the international nature of the semiconductor industry and the US’s reliance on Taiwan and Korea, the risk profile is much more complex and less controllable, as it is not contained within our borders or jurisdiction.

The risks are also less measurable and more subject to “Black Swan” events that are not well defined nor easy to protect against.

An example:

President Xi gets impatient in his quest to reunite Taiwan with the mainland and is further aggravated by the US denying him semiconductor technology. He decides that if he can’t have Taiwan and its chips that the US can’t have it either and launches one conventional low yield missile into TSMC’s leading fab that produces chips for Apple, Intel, AMD, Nvidia and Qualcomm etc… putting it out of commission.

Very few people would die or be injured, it would not start a war, but the stock market would implode and the US tech industry would fall apart. A similar threat exists in Korea as Samsung fabs are within artillery, not even missile, range of Kim Jong Un who is clearly less stable.

These risks while low are more than zero

Internal threats are more similar to 2008’s financial crisis in that they are self-inflicted: inattention, failure to execute, or similar. The semiconductor industry requires laser-like focus, copious spending and a long-term view measured in years, not quarterly results. Developing and maintaining the talent pool is a very long-term effort that is key to the industry’s success.

Intervention & protection is both financial and systemic

The semiconductor industry needs financial help to build many new fabs in the US, along with the surrounding infrastructure, but it also needs proper political and governmental support to foster the industry, protect it and incentivize it.

While the Chips for America act is a good start, it is only a down payment and without additional terms and guardrails it could potentially be much less effective.

Chips for America needs a parallel bill that sets up the proper environment and infrastructure to foster the industry in the US.

The financial bailout needs to come with clear terms and ownership positions to ensure it is properly spent and taxpayers get a return on their investment, much as happened in the case of AIG.

There also needs to be some triage and prioritization of resources such that more critical companies in the semiconductor industry get more attention much as Lehman was not on the priority list while AIG was. We would suggest strong focus on leading edge and all the associated enablers…..

Don’t get fooled by the current good times

We also think that there may be some who question putting money and effort into an industry that is currently in “party mode” with their stocks at record highs in record time with more business and profits than they can handle.
This will not last forever. There could be a soft or hard landing but there will be a landing at some point. Supply almost always catches up with demand.
Part of the need for action is to protect the industry when things aren’t as good as they are now.

When the shortages are over the issues risk being forgotten

Only over the last year have the general public and politicians gotten a very small inkling of the semiconductor industry, and only through secondary means such as the shortage of cars or other shortage-related issues.
When the shortages are over, we risk being forgotten about again as the general public focuses on the new topic du jour.

Even though semiconductors are ubiquitous, pervasive and critical, they are nonetheless “invisible” in our daily lives and thus easily forgotten unless a problem happens.

It’s hard to buy insurance for, or care about, a potential problem you can’t even remember. The semiconductor industry spent many years in obscurity and could easily return there.

The stocks

While many of the risks and issues are low probability we would still pay attention to exposure that our portfolio would have to some of these events in the semiconductor industry that could snowball into much larger problems for tech and the general economy.

Many investors I speak to do not immediately grasp the direct connection between Taiwan/China and the greater tech industry and global economy, or how small events there could create larger ripples through other sectors.
This “Butterfly Effect” of the semiconductor industry is not fully recognized or understood. Investors would be well served to look at these interrelationships and dependencies. It’s not just autos.

Spending time and money to help the industry is cheap insurance relative to the percentage of the US and global economy impacted by semiconductors. Spending tens of billions to avert trillions of risk.

We would also try to predict which semiconductor related companies would benefit most and in what ways from potential assistance efforts….and just as importantly who would lose out or be negatively impacted from those efforts.
At the top of our list of Too Big to Fail (or perhaps Too Critical to Fail) would certainly be Intel and Micron, along with all the equipment companies that hold the manufacturing know-how, such as Applied Materials, KLAC, Lam, and foreign firms such as ASML and TEL, plus EDA companies and some materials companies. While these companies are certainly not at risk right now, they are nonetheless critical to the industry and its health.

While TSMC and Samsung are certainly highly critical, they are most critical for their fabs to be built in the US that are within the safety of our borders as insurance for our tech and greater economy that currently rely on semiconductors from less stable regions.

The semiconductor industry is truly Too Big to Fail even though its products are too small to be seen and hidden in plain sight.

Also Read:

Supply Chain Breaks Under Strain Causes Miss, Weak Guide, Repairs Needed

Semicon West is Semicon Less

KLAC- Foundry/Logic Drives Outperformance- No Supply Chain Woes- Nice Beat


Podcast EP53: Breker’s New CEO Weighs in on the Company, DAC and the Future of Verification

by Daniel Nenni on 12-17-2021 at 10:00 am

Dan is joined by Dave Kelf, who was recently appointed CEO of Breker Verification Systems. Dave discusses Breker’s unique approach to verification of complex systems, what its future impact will be and what Breker will be doing at DAC.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


COVID Still Impacting Electronics

by Bill Jewell on 12-17-2021 at 6:00 am


Electronics production has been volatile over the past two years primarily due to the COVID-19 pandemic. Electronics production three-month-average change versus a year ago is shown below for key Asian countries. COVID-19 shutdowns affected production in early 2020. Trends in 2021 show a strong bounce back.
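The "three-month-average change versus a year ago" metric used in these charts can be sketched as follows, using a hypothetical monthly production index (the values below are made up purely for illustration):

```python
# Sketch of the "three-month-average change versus a year ago" metric:
# smooth each month with a trailing 3-month average, then compare against
# the same 3-month window one year earlier.
def three_month_avg(series, i):
    """Average of the three months ending at index i."""
    return sum(series[i - 2:i + 1]) / 3.0

def yoy_3mma_change(series, i):
    """Percent change of the 3-month average vs. a year earlier."""
    return 100.0 * (three_month_avg(series, i) / three_month_avg(series, i - 12) - 1.0)

# 24 months of a made-up production index rising one point per month
index = [100 + m for m in range(24)]
print(round(yoy_3mma_change(index, 23), 1))   # 10.9
```

Smoothing over three months damps single-month noise, which is why the country trends described below read as fairly steady curves rather than jagged monthly swings.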

The key trends by country are:

South Korea – electronics production was not significantly impacted by COVID, with March 2020 three-month-average production up 25% from a year earlier. Recent growth has been strong, 20% or higher since June 2021. South Korea avoided significant COVID slowdowns by emphasizing early detection, containment, and treatment.

China – the source country of the COVID virus – imposed major shutdowns in early 2020, resulting in March 2020 production down 6% from a year earlier. Production recovered beginning in April 2020. Early 2021 showed a strong recovery, with March 2021 up 36% from the weak period a year earlier. China’s production growth has stabilized in the 12% to 13% range since May 2021.

Taiwan – production was moderately affected by COVID shutdowns, with March 2020 up only 5% from a year earlier, in contrast to the 20% plus growth in most of 2019. Since April 2020, Taiwan production growth has been relatively stable in the range of 5% to 11%.

Japan – electronics production has been declining for several years primarily due to manufacturing shifting to lower wage countries. Growth turned positive in August 2019 before declining again in December 2019. Japan avoided significant COVID cases early in the pandemic, but a surge of cases in July and August 2020 led to some shutdowns and a production decline of 18% in September 2020. Production turned positive in February 2021, reaching a peak of 11% in June 2021. Since June, production has decelerated, reaching 0% change in October 2021.

Vietnam – electronic production has been on a strong growth trend in recent years primarily due to manufacturing shifts from China and South Korea. COVID related shutdowns led to a production decline of 12% in May 2020. Production quickly recovered reaching 25% growth in January 2021. Vietnam was held up as an example to the world when its strict containment measures led to relatively few COVID cases in 2020. However, Vietnam saw a sharp increase in COVID cases driven by the Delta variant beginning in July 2021. A shutdown from July 8 to October 1, 2021, in much of the south of Vietnam resulted in an electronic production decline of 11% in August 2021. The decline eased to 6% in November 2021.

The following chart shows electronics production three-month-average change versus a year ago for the United States (U.S.), United Kingdom (UK), and the 27 countries of the European Union (EU27).

The key trends are:

United States – electronics production was not significantly affected by COVID-19 as shutdowns of factories were isolated. U.S. production growth was weak in 2019, ranging from a 1% decline to a 2% increase. The weakness continued in the first half of 2020 before picking up to growth in the 6% to 9% range in August 2020 through August 2021. In September and October 2021 growth moderated to about 4%.

United Kingdom – production was generally weak in 2019, ranging from a 3% decline to 4% growth. The UK instituted a nationwide lockdown due to COVID beginning in March 2020 and easing up in May and June of 2020. Production declined by 19% from a year ago in May and June of 2020. Year-to-year growth did not turn positive until April 2021 and peaked at 13% growth in June 2021 compared to the weak June of 2020. Growth has been decelerating in the last several months, with October 2021 down 3% from a year ago. In addition to COVID, the UK has been dealing with the effects of Brexit (the UK withdrawal from the EU) which became official at the end of 2020.

European Union – countries had varied lockdown policies in early 2020, but the overall effect was a 6% decline in production in April and May of 2020 versus a year earlier. Production rebounded to a strong 24% growth in January 2021 and has remained in the 18% to 30% growth range since. EU electronics production has been a beneficiary of Brexit as some production previously done in the UK has now shifted to the EU. Also, the EU27 as a whole has been less impacted by COVID than the UK. According to Worldometer, the UK has 162 COVID-19 cases per 1,000 people, twice the rate of 80 in Germany, the largest EU manufacturer.

The impact of COVID-19 is also reflected in the unit shipment data of two key electronic devices: PCs and smartphones. According to IDC, PC shipments fell 8% in 1Q 2020 versus a year earlier, primarily due to COVID-related production shutdowns. In the next three quarters, PC shipments grew strongly, from a 14% increase in 2Q 2020 to 26% in 4Q 2020. 1Q 2021 was up 55% compared to the weak 1Q 2020. Demand for PCs was strong due to the pandemic: shutdowns and other restrictions forced many people to work from home and many students to learn from home, and the increase in electronic communication led many households to acquire or upgrade PCs. PC growth moderated to 4% in 3Q 2021 as much of the demand increase was satisfied. In addition, component shortages limited some PC production. This month, IDC projected PC shipments will increase 13.5% in 2021 and moderate to 0.3% growth in 2022.

Smartphone shipments were hit hard by the COVID pandemic in 1Q 2020 since most production is done in China, which shut down most of its manufacturing in early 2020. IDC stated shipments were down versus a year ago by 12% in 1Q 2020 and by 17% in 2Q 2020. Shipment growth recovered to 26% in 1Q 2021 and 13% in 2Q 2021. In 3Q 2021, shipments were down 7% from a year ago. IDC attributes the decline to component shortages and other logistical problems. IDC expects year 2021 smartphone growth will be 5.3%, slightly moderating to 3.0% in 2022.

The world and the electronics industry are still feeling major effects from the COVID-19 pandemic. Worldometer shows the world is currently in a fifth wave of the virus. However, the death rate from COVID-19 is declining due to vaccinations, better treatments, and improved control methods. Electronics production has been hurt by various production shutdowns, component shortages and logistical challenges. These issues will probably continue through most of 2022. By 2023, electronics production should be back to typical trends. I am not using the word normal since nothing will seem normal again for several years.

Also Read:

2021 Finishing Strong with 2022 Moderating

Semiconductor CapEx too strong?

Auto Semiconductor Shortage Worsens


Is Ansys Reviving the Collaborative Business Model in EDA?

Is Ansys Reviving the Collaborative Business Model in EDA?
by Daniel Nenni on 12-16-2021 at 10:00 am

Evolution of Multiphysics Complexity

The Electronic Design Automation (EDA) industry used to be a bustling bazaar of scrappy startups, along with medium sized companies that dominated a technology space, and big main-line vendors. The annual Design Automation Conference was noisy, hectic, and sprawled over multiple large convention halls. This diversity meant that designers needed to stitch together their chip design flows with point tools from many software tool vendors. As a consequence, design companies all set up dedicated internal methodology teams (or ‘CAD teams’) to evaluate, set up, integrate, and maintain a suite of design software tools for their chip design teams.

That all changed with the strong consolidation that swept through EDA in the early 2000s. This change mirrored the consolidation experienced across all sectors of the semiconductor industry, including silicon manufacturers, fab equipment vendors, and chip design companies themselves. The EDA industry now counts only 4 major vendors that make up the bulk of the electronic design software market: Synopsys ($3.7B), Cadence Design Systems ($2.7B), Siemens EDA (~$1.8B), and Ansys ($1.7B).

One casualty of this consolidation drive was the abandonment of the open, collaborative business model espoused by the earlier EDA companies. Instead, a closed garden mentality took over that strove to put in place “full flow”, single vendor, exclusive contracts. Despite some limited success, this approach never really took hold, especially at the major semiconductor houses that provide the bulk of EDA revenues.

There are two major reasons for the failure of this model: Firstly, customers prefer not to tie themselves to a single vendor and lose their leverage in commercial negotiations. Secondly, economics aside, it was always a technical non-starter. The reality is, and always has been, that no single vendor provides competitive technical solutions for the complete range of requirements from major semiconductor customers. This fact has become even more salient with the rapid technical evolution of both Moore’s Law and More-than-Moore that is leading to radical change in design challenges:

  • Ultra-low voltage, high speed silicon processes blur the line between analog and digital – high speed interconnect on interposers now routinely requires detailed electromagnetic field analysis. And Dynamic Voltage Drop now contributes about 30% to total path timing at 7nm and below.
  • 3D-IC multi-die systems and chiplets have blurred the lines between IC and PCB design techniques.
  • Power dissipation has become the number 1 issue for many applications and has blurred the lines between chip and package design. 3D-IC and chiplet designers at the early floorplanning stage now need to worry about thermal management, cooling, heat sinks, and concerns over mechanical stress/warpage reliability.

The result has been a resurgence in the realization that chip design is an incredibly complex multiphysics problem and that no single company has the breadth and depth of technology to solve it all. Ansys, for one, has embraced this reality by leading the industry in reviving the traditional open platform approach to EDA. They have vigorously pursued collaborations, partnerships, and joint developments with other vendors to address deep technical issues facing designers and create unique cross-disciplinary solutions.

The range of Ansys’ collaborations reflects the already broad range of engineering analysis tools it sells. An early step down this road started in 2017 when Ansys and Synopsys partnered to integrate Ansys RedHawk-SC power integrity analysis natively inside Synopsys’ Fusion Compiler implementation product. This collaboration has deepened with the release of Synopsys 3DIC Compiler that relies on Ansys RedHawk-SC Electrothermal for thermal and interposer analysis of 3D-ICs.

Ansys has also collaborated with Siemens EDA to deliver a direct link between Siemens’ Veloce hardware emulator and Ansys PowerArtist RTL power analysis tool. This push towards collaboration was on full display at the recent IDEAS Forum hosted by Ansys, where we saw keynote speeches by Tom Lillig, Technology Business Leader at Keysight; Siva Yerramilli, Corporate VP for Strategy and System Architects at Synopsys; and Ted Pawela, Chief Ecosystem Officer at Altium. There was also a presentation by Gilles Lamant from Cadence Design Systems on joint optical solutions. This is an unprecedented range of competing companies that nevertheless see value in coming together to address specific problems for their customers. I believe it may herald the revival of a more cooperative business trend in building viable electronic design flows.

Ansys has embraced this market development with its own internal reorganization that saw the merger of its Semiconductor division and Electronics division under the leadership of John Lee, GM Electronics and Semiconductor Business Unit. John is a strong proponent of providing open platforms to allow the broadest array of design tools to work together and exchange data. Under his leadership, Ansys has broadened its relationship with Synopsys, shifted its own development priorities to embrace open platforms, and has reached out to complementary tool providers to create industry solutions for Ansys’ diverse customer base. I think this is an interesting trend that may well benefit the EDA industry in general.

Also Read

A Practical Approach to Better Thermal Analysis for Chip and Package

Ansys CEO Ajei Gopal’s Keynote on 3D-IC at Samsung SAFE Forum

Ansys to Present Multiphysics Cloud Enablement with Microsoft Azure at DAC


Ramping Up Software Ideas for Hardware Design

Ramping Up Software Ideas for Hardware Design
by Bernard Murphy on 12-16-2021 at 6:00 am

Bridging chasm

This is a topic in which I have a lot of interest, covered in a panel at this year’s DAC; Raúl Camposano chaired the session. I had earlier covered a keynote by Moshe Zalcberg at Europe DVCon late in 2020; he now reprises the topic. Given the incredible pace of innovation and scale in software development these days, I don’t see what we have to lose in looking harder for parallels, and in ramping up software ideas for hardware design.

Moshe Zalcberg on why we should think about this

Moshe makes the point that chip design is outrageously expensive, and designers are understandably averse to risky experiments. But as design continues to become even more outrageously expensive, the downside of not looking for new ideas may become even more compelling.

He cites relatively slow change in for example verification methodologies versus more rapid evolution in mobile phone technologies, semiconductor processes and most popular software languages. We’re staying level on verification effort and respins as complexity continues to grow but he wonders if we could do better. Competition isn’t only with complexity; we’re also competing with each other. Any team that is able to find significant advantage in some way will jump ahead of the rest of us. Yes, change is risky but so also is stasis.

He suggests a range of ideas we might borrow from the software world, from open-source to Python as a language (for test especially), to Agile, continuous integration and deployment (CI/CD), leveraging data more effectively and of course AI. Tentative steps are already being taken in some areas; we always need to be thinking about what we might borrow from our software counterparts.

Rob Mains on Open-Source Chip Design

I hear a lot of enthusiasm for open-source EDA, but what about open-source design? The RISC-V ecosystem is showing this can work. Rob Mains is executive director at CHIPS Alliance, whose mission is to encourage collaboration and open-source practices in hardware. CHIPS Alliance is part of the Linux Foundation, which is a good start. They have heavyweight support from Google, Intel, SiFive, Alibaba and a lot of other companies and universities.

Rob sees a primary focus in promoting an open ecosystem, through for example standard bus protocols like OmniXtend and the Advanced Interface Bus (AIB) between chiplets. He also sees opportunity for certain open-source EDA directions which could change the game, for example an open PDK infrastructure. In this spirit he also mentioned Chisel and Rocket Chip, as well as the BAG family of generators from Berkeley, the FASoC family of tools from the University of Michigan and layout synthesis from UT Austin.

Rob has some interesting predictions for this decade, for example that 50% or more of design will be open-source based and that design entry to implementation will no longer require human intervention. Bold claims. Viewed as a moonshot I’m sure they’ll drive some interesting progress.

Neil Johnson on Agile Design

Neil Johnson, now at Siemens, is a very accomplished thinker and speaker in this domain. He has embraced Agile and related methods wholeheartedly, yet accepts that he lives in a world of skeptics who “don’t buy any of this Agile nonsense”. He starts with his own ten-year journey in Agile, a testament to his credibility in this domain. He follows that with a poem he wrote titled “Your Agile is for Chumps”, a gentle but persuasive walk through counterarguments to the opposition he has heard to Agile methods.

I won’t ruin the experience by attempting to summarize this presentation. You should really watch the video (link below). I will say that he had me convinced, not by beating me over the head with claims that my arguments are wrong but by gentle reasoning that there’s a different way to look at the components of Agile, and that perhaps traditional approaches may not be as solid as we think.

Vicki Mitchell on MLOps

This talk, presented by Vicki Mitchell, may require a couple of cognitive jumps for most of us. First you need to understand what DevOps is in the software world. According to AWS “this is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.” In other words, not the end software products but all the infrastructure and ecosystems that support the development of those products. These concepts are creeping into hardware design through adoption of tools like Jama, Jenkins and others. Vicki has presented multiple times on the value of DevOps practices in hardware design.

Now think about that philosophy for ML, particularly as used in adoption of ML in design practices. Hang on tight; this does make sense, but it is mind-bending. Vicki presents it as putting data and machine learning together. The summary I find easiest to understand is that use of ML in design cannot depend on a one-time training activity. It must continuously improve as new designs are encountered and new data is generated. MLOps is a way to make ML adjust flexibly yet robustly to this landscape of changing data, requirements and quite possibly models.

When ML becomes part of even a waterfall flow with regressions, or of CI/CD flows, it must fit into the DevOps flow: continuous integration, automated testing, pipelining, so that failing or slow components don’t roadblock the whole flow as tests, design data and constraints change. In CI/CD flows, everything must adapt to supporting continuous integration and be continuously deployable. There’s a lot more good stuff here and in all the talks. Watch the video.
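A minimal, hypothetical sketch of what such a gate might look like (my construction, not from Vicki's talk): the ML step is trusted only while its error on freshly generated design data stays under a threshold, otherwise the flow falls back to the non-ML path and queues retraining rather than blocking the regression.

```python
# Hypothetical sketch (not from the talk): an MLOps-style gate inside a
# CI/CD regression flow. The ML step is trusted only while its error on
# freshly generated design data stays under a threshold; otherwise the
# flow bypasses it and queues retraining instead of blocking everything.

def mean_abs_error(predictions, actuals):
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def ci_gate(predictions, actuals, threshold=0.1):
    """Return (use_ml, retrain) decisions for this regression run."""
    error = mean_abs_error(predictions, actuals)
    if error > threshold:
        return False, True   # fall back to the non-ML path, queue retraining
    return True, False       # model is still trustworthy

print(ci_gate([1.0, 2.1, 2.9], [1.0, 2.0, 3.0]))  # (True, False)
print(ci_gate([5.0, 5.0, 5.0], [1.0, 2.0, 3.0]))  # (False, True)
```

The point of the sketch is the decision structure, not the metric: a real flow would track drift over many runs and many models, but the gate keeps the pipeline continuously deployable either way.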

Finally a shout-out to Raúl, my partner with Paul Cunningham on the Innovation in Verification blogs. He started with a remembrance of Jim Hogan, who we all miss. Raúl asked several insightful questions at the end of each talk. This blog would run to many thousands of words if I did justice to his questions and each of the talks. Again, watch the video!

Also Read:

Verification Completion: When is enough enough?  Part I

Verification Completion: When is Enough Enough?  Part II

On Standards and Open-Sourcing. Verification Talks


Top 10 Takeaways from DAC 2021

Top 10 Takeaways from DAC 2021
by Tom Dillinger on 12-15-2021 at 2:00 pm

stopped clock license model

The “in-person” portion of the Design Automation Conference (DAC) was recently held in San Francisco.  (As several presenters were unable to attend, a “virtual” program is also available.)  The presentations spanned a wide gamut – e.g., technical advances in design automation algorithms;  new features in commercial EDA tools;  technical and financial trends and forecasts;  and, industry standards activities.

In recent years, the DAC Organizing Committee has expanded the traditional algorithm/tool focus to include novel IP, SoC, and system design techniques and methodologies.  The talks in the Design and IP Track provided insights into how teams are addressing increasing complexity afforded by new silicon and packaging technologies, as well as ensuring more stringent requirements on reliability, security, and safety are being met.

Appended below is a (very subjective) list of impressions from DAC.  It is likely no surprise that several of these refer to the growing influence of machine learning (ML) technology on both the nature of chip designs and the EDA tools themselves.  The impact of cloud-based computational resources was also prevalent in the trend presentations.  Here are the Top 10 takeaways:

(10)  systems companies and EDA requirements

Several trend-related presentations highlighted the investments being made by hyperscale data center and systems companies in internally staffing SoC design teams – e.g., Google, Meta, Microsoft, Amazon, etc.  A panel discussion that asked representatives from these companies “What do you need from EDA?” could be summed up in four words:  “bigger, faster emulation systems”.

(Parenthetically, one rather startling financial forecast was, “50% of all EDA revenue will ultimately come from systems companies.”)

(9)  domain-specific architectures

The financial forecast talks were uniformly upbeat (see (8)) – hardly a financial bear in sight.  The expectation is that (fabless, IDM, and systems) IC designers will increasingly be seeking to differentiate their products by incorporating “domain-specific architectures” as part of SoC and/or package integration.  As will be discussed shortly, the influence of ML opportunities to add to product features is a key driver for DSA designs, whether pursuing data center training or data center/edge inference.

The counter-argument to DSA designs is that ML network topologies continue to evolve rapidly (see (5)).  For data center applications, a general-purpose programmable engine, such as a GPGPU/CPU with a rich instruction set architecture, may provide more flexibility to quickly adapt to new network types.  A keynote speaker provided the following view:  “It’s a tradeoff between the energy costs of computation versus data movement.  If a general-purpose (GPU) architecture can execute energy-intensive MAC computations for complex data types, the relative cost of data movement is reduced – no need for specialized hardware.”

(8)  diverse design starts

A large part of the financial optimism is based on the diversity of industries pursuing new IC designs.  The thinking is that even if one industry segment were to stall, other segments would no doubt pick up the slack.  The figure below illustrates the breadth in design starts among emerging market segments.

As the EDA industry growth relies heavily on design starts, their financial forecasts were very optimistic.

(7) transition to the cloud

Another forecast – perhaps startling, perhaps not – was “50% of all EDA computing cycles will be provided by cloud resources”.

The presenter’s contention was that new, small design companies do not have the resources or the interest in building an internal IT infrastructure, and are “more open to newer methods and flows”.

Several EDA presentations acknowledged the need to address this trend – “We must ensure the algorithms in our tools leverage multi-threaded and parallel computation approaches to the maximal extent possible, to support cloud-based computation.” 

Yet, not everyone was convinced the cloud transition will proceed smoothly…  read on.

(6)  “EDA licensing needs to adopt a SaaS model”

A very pointed argument by a DAC keynote speaker was that EDA licensing models are inconsistent with the trend to leverage cloud computing resources.  He opined, “A stopped watch is correct twice a day – similarly, the amount of EDA licenses is right only twice in the overall schedule of a design project.  The rest of the IT industry has embraced the Software as a Service model – EDA companies need to do the same.”

The figure below illustrates the “stopped watch licensing model”.

(The opportunity to periodically re-mix license quantities of specific EDA products in a multi-year license lease agreement mitigates the issue somewhat.)  The keynote speaker acknowledged that changing the existing financial model for licensing would encounter considerable resistance from EDA companies.

(5)  ML applications

There were numerous presentations on the growth anticipated for ML-specific designs, for both very high-end data center training/inference and for low-end/edge inference.

  • high-end data center ML growth

For ML running in hyperscale data centers, the focus remains on improving the classification accuracy for image and natural language processing.  One keynote speaker reminded the audience, “Although AI concepts are decades old, we’re really still in the very early stages of exploring ML architectures for these applications.  The adaptation of GPGPU hardware to the ML computational workload really only began around 10 years ago.  We’re constantly evolving to new network topologies, computational algorithms, and back-propagation training error optimization techniques.”

The figure below highlights the complexity of neural network growth for image classification over the past decade, showing the amount of computation required to improve classification accuracy.

(The left axis is the “Top 1” classification match accuracy to the labeled training dataset.  One indication of the continued focus on improved accuracy is that neural networks used to be given credit for a classification match if the correct label was in the “Top 5” predictions.)
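For readers unfamiliar with the metric, a small illustrative example (my own, not from the presentation) shows how the same ranked predictions score very differently on Top-1 versus Top-5:

```python
# Illustrative example (mine, not from the presentation): Top-1 credits
# only the single most likely prediction; Top-5 credits a match anywhere
# in the five highest-ranked predictions.

def top_k_accuracy(ranked_predictions, true_labels, k):
    hits = sum(1 for ranked, truth in zip(ranked_predictions, true_labels)
               if truth in ranked[:k])
    return hits / len(true_labels)

preds = [["cat", "dog", "fox", "cow", "hen"],
         ["dog", "cat", "fox", "cow", "hen"],
         ["fox", "cow", "hen", "cat", "dog"]]
truth = ["cat", "cat", "cat"]

print(top_k_accuracy(preds, truth, 1))  # only the first sample is a Top-1 hit (~0.33)
print(top_k_accuracy(preds, truth, 5))  # 1.0, every sample has "cat" in its top 5
```

The shift in reporting from Top-5 to Top-1 thus raises the bar considerably, which is one reason the computation required for leading-edge accuracy keeps climbing.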

  • low-end/edge ML growth

A considerable number of technical and trend presentations focused on adapting ML networks used for training to the stringent PPA and cost requirements of edge inference.  High-precision data types for weights and intermediate network node results may be quantized to smaller, more PPA-efficient representations.
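As a sketch of what that quantization step involves, here is a minimal symmetric float-to-int8 scheme (my simplification; production toolchains use more elaborate per-channel and asymmetric variants):

```python
# A minimal sketch (my simplification) of symmetric linear quantization:
# float weights are mapped to int8 with a single per-tensor scale factor.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# q == [50, -127, 2, 100]; approx recovers each weight to within one
# quantization step (a scale of about 0.01 here), trading precision
# for smaller, cheaper arithmetic and storage.
```

Each 32-bit float weight becomes an 8-bit integer plus a shared scale, a 4x reduction in weight storage and a large PPA win for the MAC hardware.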

One presenter challenged the audience with the following scenario. “Consider Industrial IoT (IIoT) applications, where sensors and transducers integrated with low-cost microcontrollers provide real-time monitoring.  In many cases, it’s not sufficient to simply detect a vibration or noise or pressure change or image defect that exceeds some threshold – it is necessary to classify the sensor output to a specific pattern and respond accordingly.  This is ideally suited to the use of small ML engines running on a corresponding microcontroller.  I bet many of you in the audience are already thinking of IIoT ML applications.”

(4)  HLS and designer productivity

There were several presentations encouraging design teams to embrace higher levels of design abstraction, and correspondingly high-level synthesis, to address increasing SoC complexity.

Designers were encouraged to go to SystemC.org to learn of the latest progress in the definition of the SystemC language standard, and specifically, the SystemC synthesizable subset.

(3)  clocks

Of all the challenges faced by design teams, it was clear from numerous DAC presentations that managing the growing number of clock domains in current SoC designs is paramount.

From an architectural perspective, setting up and (flawlessly) exercising clock domain crossing (CDC) checks for proper synchronization is crucial.

From a physical implementation perspective, developing clock cell placement and interconnect routing strategies to achieve latency targets and observe skew constraints is exceedingly difficult.  One insightful paper highlighted the challenges in (multiplexed) clock management and distribution for a PCIe5 IP macro.

Increasingly, physical synthesis flows are leveraging “useful skew” between clock arrival endpoints as another optimization method to address long path delays (and, as an indirect benefit, to distribute instantaneous switching activity).  A compelling DAC paper highlighted how useful skew indeed helps close “late” timing, but may aggravate “early” timing paths, necessitating much greater delay buffering to fix hold paths.  The author described a unique methodology to identify a combination of useful skew implementations to adjust both late and early clock arrival endpoints to reduce hold buffering, saving both power and block area.
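The late/early tradeoff the paper describes can be seen in simple slack arithmetic. The toy numbers below are my own, chosen only to show the mechanism:

```python
# Toy numbers (mine, not from the paper) showing the useful-skew tradeoff:
# delaying the capture clock by `skew` adds margin on the late (setup)
# path but removes the same margin from the early (hold) path at that flop.

def setup_slack(period, data_delay, skew):
    # the capture edge arrives `skew` later, so the data gets extra time
    return (period + skew) - data_delay

def hold_slack(data_delay, hold_time, skew):
    # the same delayed capture edge tightens the hold check
    return data_delay - (hold_time + skew)

period, late_path, early_path, hold_time = 1.0, 1.1, 0.15, 0.05

print(round(setup_slack(period, late_path, skew=0.0), 3))    # -0.1: late path fails
print(round(setup_slack(period, late_path, skew=0.2), 3))    # 0.1: skew closes setup...
print(round(hold_slack(early_path, hold_time, skew=0.2), 3)) # -0.1: ...but breaks hold,
# so the early path now needs roughly 0.1 of added delay buffering
```

This is exactly the effect the author's methodology targets: choosing skew values that close late paths without pushing early paths far enough negative to demand large hold-buffer insertion.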

Static timing analysis requires diligent attention to clock definitions and timing constraints – multiply that effort for multi-mode, multi-corner analysis across the range of operating conditions.  One presentation highlighted the need for improved methods to characterize and analyze timing with statistical variation.  In the future, it will become more common to tell project management that “the design is closed to n-sigma timing”.
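As a back-of-envelope view of what "closed to n-sigma timing" implies, the fraction of a normally distributed delay population falling within n sigma can be computed directly (a textbook identity, not from the presentation):

```python
import math

# Textbook identity (not from the presentation): the fraction of a
# normal distribution that falls within n standard deviations of the
# mean, i.e. roughly how rarely a statistically varying path would
# fall outside its n-sigma closure target.

def within_n_sigma(n):
    return math.erf(n / math.sqrt(2))

for n in (1, 2, 3):
    print(n, round(within_n_sigma(n), 4))
# 1 0.6827
# 2 0.9545
# 3 0.9973
```

Closing to 3-sigma rather than worst-case corners trades a ~0.3% statistical tail for substantially less pessimism in the margins.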

(2)  ML in EDA

There was lots of interest in how ML techniques are influencing EDA tools and flows.  Here are some high-level observations:

  • ML “inside”

One approach is to incorporate an ML technology directly within a tool algorithm.  Here was a thought-provoking comment from a keynote talk:  “The training of ML networks takes an input state, and forward calculates a result.  There is an error function which serves as the optimization target.  Back propagation of partial derivatives of this function with respect to existing network parameters drives the iterative training improvement.  There are analogies in EDA – consider cell placement.”

The keynote speaker continued, “The current placement is used to calculate a result comprised of a combination of total net length estimates, local routing congestion, and critical net timing.  The goal is to optimize this (weighted) result calculation.  This is an ideal application to employ ML techniques within the cell placement optimization algorithm.”
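To illustrate the analogy (a toy of my own construction, not the keynote's method), a one-dimensional placement with a quadratic wirelength objective can be refined by plain gradient descent, the same forward-calculate then back-propagate loop the speaker describes:

```python
# Toy illustration (mine, not the keynote's method): one-dimensional cell
# placement with a quadratic wirelength objective, refined by gradient
# descent. Real placers add density, legality, congestion, and timing
# terms to the weighted objective, as the speaker describes.

def wirelength_grad(positions, nets):
    grads = [0.0] * len(positions)
    for a, b in nets:                 # each net connects two cell indices
        diff = positions[a] - positions[b]
        grads[a] += 2 * diff          # d/dx_a of (x_a - x_b)^2
        grads[b] -= 2 * diff
    return grads

def place(positions, nets, lr=0.1, steps=100):
    for _ in range(steps):
        g = wirelength_grad(positions, nets)
        positions = [p - lr * gp for p, gp in zip(positions, g)]
    return positions

# Two cells joined by one net drift toward each other, converging near 5.0
print(place([0.0, 10.0], [(0, 1)]))
```

The forward pass computes the objective from the current placement; the gradient plays the role of back-propagated error, iteratively improving the result, which is precisely why ML optimization machinery maps so naturally onto placement.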

  • ML “outside”

Another methodology approach is to apply ML techniques “outside” an existing EDA tool/algorithm.  For example, block physical implementation is an iterative process, from initial results using early RTL through subsequent RTL model releases.  Additionally, physical engineers iterate on a single model using various combinations of constraints provided throughout the overall flow, to evaluate QoR differences.  This accumulation of physical data over the development cycle can serve as the (design-specific) data set for ML training, helping the engineer develop an optimal flow.

(1)  functional safety and security

Perhaps the most challenging, disruptive, and nonetheless exciting area impacting the entire design and EDA industry is the increasing requirement to address both functional safety and security requirements.

Although often mentioned together, functional safety and security are quite different, and according to one DAC presenter “may even conflict with each other”.

FuSa (for short) refers to the requisite hardware and software design features incorporated to respond to systematic and/or random failures.  One presenter highlighted that the infrastructure is in place to enable designers to identify and trace the definition and validation of FuSa features, through the ISO 26262 and IEC 61508 standard structure, saying, “We know how to propagate FuSa data through flows and the supply chain.  Correspondingly, we have confidence in the usage of software tools.”  Yet, a member of the same panel said, “The challenge is now building the expertise to know where and how to insert FuSa features.  How do you ensure the system will act appropriately when subjected to a random failure?  We are still in the infancy of FuSa as an engineering discipline.”

The EDA industry has responded to the increasing importance of FuSa by providing specific products to assist with ISO 26262 data dependency management and traceability.

Security issues have continued to arise throughout our industry.  In short, security in electronic systems covers:

  • side channel attacks (e.g., an adversary listening to emissions)
  • malicious hardware (e.g., “Trojans” inserted in the manufacturing flow)
  • reverse engineering (adversaries accessing design data)
  • supply chain disruptions (e.g., clones, counterfeits, re-marked modules;  the expectation is that die will be identified, authenticated, and tracked throughout)

The design implementation flow needs to add security hardware IP to protect against these attack “surfaces”.

Here’s a link to another SemiWiki article that covers in more detail the activities of the Accellera Security for Electronic Design Integration working group to help define security-related standards and establish a knowledge base of progress in addressing these issues – link.

To me, product FuSa and security requirements will have pervasive impacts on system design, IP development, and EDA tools/flows.

Can’t wait for the next DAC, on July 10-14, 2022, in San Francisco.

-chipguy