Cut Out the Cutouts
by Aaron Edwards on 12-23-2021 at 6:00 am

In 2014, many of the customers that my team and I supported in North America were still using HFSS 3D to model boards and packages. These customers were content with that interface, able to get their models set up quickly, and were okay with the solution times, because when HFSS gave them an answer, they knew it was the right answer. I was a little frustrated with this situation because Ansys had delivered some amazing technology in the HFSS 3D Layout environment, and within the HFSS solver, that allowed customers to get those same extremely accurate answers in less time. As I have blogged about before, Ansys introduced key technologies that enabled substantial reductions in simulation times:

  • Phi Mesher – an efficient meshing technique that tackles the layered structures found in PCB, package and IC designs
  • Distributed Direct Solver – distributes the matrix solution during the adaptive pass or frequency sweep stages across multiple cores or across multiple nodes for improved scalability
  • Auto HPC – allows HFSS to optimally apply the total number of cores and/or machines available to solve the project in the most efficient manner
  • Ansys on the Azure Cloud – made HPC extremely easy to access and to scale up cores/RAM to solve models fast

These were just a few of the key advances in the software that allowed users to speed up their simulation times… yet users were not switching to HFSS 3D Layout. Six years later, in June of 2020, prompted by competitive claims against HFSS, our customers came to us stating, “I hear that other tools can run faster than HFSS… how can you make HFSS run faster?” Like a broken record, I told them about HFSS 3D Layout, the Phi Mesher, the Distributed Direct Solver, Auto HPC, and Ansys on the Azure Cloud. Those competitive claims were just what we needed to get our customers to see the light! HFSS 3D Layout was poised to be the solution they were looking for, and for every benchmark we ran, it reduced their solution times by 2X, and in some cases up to 20X. Eighteen months later, our customers have fully embraced HFSS 3D Layout for their chip/package/board workflows, and I see them benefiting in two ways. Some are able to solve their critical nets far faster than before and can give pass/fail metrics to their design teams much sooner, which allows those teams to get their products to market faster. The other benefit I see is that customers are solving 2X to 4X the number of nets they would have solved before. The simulation times are still reasonable for the increased number of nets, and customers are able to characterize reliability concerns like crosstalk and cross-coupling to ensure robust designs. Being able to solve larger portions of a design in one model reduces failures after production, because more of the design has been modeled under real-world conditions.

With the incredible success our customers have had with HFSS 3D Layout, there is still one issue that I see in their models today… and that is ‘cutouts’. Customers are still trying to make the model as small as possible to reduce the amount of RAM it needs to solve. I want to stop this practice, but I know it is ingrained in our user base… and I know why they do it… ME! Yes, this was a common practice 5-10 years ago, when the overwhelming majority of our customers were running projects on one machine. That machine may have had 250GB of RAM if they were lucky, and making sure the simulation was able to mesh, adaptively refine and complete a frequency sweep, all within that RAM footprint, was critical. So Ansys’ AE staff back then would teach customers how to create cutouts that would minimize the size of the model and, in turn, minimize the RAM footprint. Sometimes that practice would cause accuracy issues, because the cutout would be too close to the traces and adversely affect the return path, and/or introduce false return paths. There were two common ways to cut the model: using a conformal cut that followed the path of the traces (which often created rounded edges), or manually creating a polygon that closely followed the traces. We would also teach customers to methodically go through the design and remove any object that was not electrically important to the model. Objects like vias, pads, and thru-holes were all manually removed to get rid of those unnecessary mesh elements. This type of cleanup could take hours to perform.

So, I am happy to announce that with 2021R2 HFSS 3D Layout, and the aforementioned key advancements in the software, we can abandon those practices. Our suggestion is to simply use a rectangular cut placed well away from the traces. No more conformal cuts, no more polygons, and in general… no need to perform excessive cleanup. Why should you use just a rectangular cutout? For one, hardware has dramatically improved. Many customers have access to on-prem hardware that is well above the previous standard of 250GB of RAM, and they have been able to string multiple machines together to run larger simulations without issue. We have also seen adoption of the Ansys Cloud, which has given customers access to the cores/RAM needed to solve their biggest projects. Secondly, rectangular cuts help the mesher avoid the unnecessary edges and vertices that the old cutout methodologies introduced.
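
To make the recommendation concrete, here is a minimal sketch of the geometry involved. The helper name and the 3 mm margin are illustrative assumptions of mine, not an Ansys API:

```python
# A minimal sketch of the recommended cutout: one plain rectangle expanded
# well beyond the nets of interest. The function name and 3 mm margin are
# hypothetical; use whatever clearance keeps the cut edge away from the
# return-current paths in your design.

def rectangular_cutout(trace_bbox, margin_mm=3.0):
    """Expand the bounding box of the critical traces by a uniform margin."""
    x_min, y_min, x_max, y_max = trace_bbox
    return (x_min - margin_mm, y_min - margin_mm,
            x_max + margin_mm, y_max + margin_mm)

# Bounding box of the critical nets, in mm.
print(rectangular_cutout((10.0, 5.0, 42.0, 18.0)))
# (7.0, 2.0, 45.0, 21.0) -- one rectangle, no conformal outline, no polygons
```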

Need more convincing? Below, I show a quick comparison of the old methodologies used for cutting out a model, versus the 2021R2 methodology of using a rectangular cut.

  • The Conformal Cutout – The outer airbox follows the path of the traces, which creates many rounded edges and vertices
  • The Bounding Box – Uses a polygon to cut through the model, but creates excessive dielectric regions that don’t match real-world operation
  • The Rectangular Cut – The outline is far away from the traces and ensures the natural return path is preserved

Key takeaways from the simulation results

  • The smaller models didn’t solve faster
    • Conformal – 54 mins; Bounding_Box – 52 mins; Rectangular Cutout – 47 mins
    • The total solution time for adaptive convergence was fastest with the rectangular cutout, largely because convergence took only 9 passes rather than 10.
  • The smaller models had fewer initial/final mesh elements
    • This is true, but I wanted to show that a larger cut doesn’t mean it will take longer to solve. As you can see, the rectangular cut produced the smallest RAM footprint, because it lets the mesher focus refinement on the fields around the traces rather than on the edge boundary conditions.
    • Conformal – 79GB; Bounding_Box – 66GB; Rectangular Cutout – 63.3GB
  • Take the guesswork out of the simulation
    • Making a large rectangular cutout keeps users from making cutouts too small, which can affect the return paths.
    • It reduces the need to clean up the model

Summary

  • Use the latest release, which is currently 2021R2
  • Use HFSS 3D Layout for planar designs like IC, packages, and boards
  • Use rectangular cutouts when cutting models down from their original size
  • Contact an Ansys AE if you need help with setting up any of your models!

…and cut out the cutouts!

Also Read

Is Ansys Reviving the Collaborative Business Model in EDA?

A Practical Approach to Better Thermal Analysis for Chip and Package

Ansys CEO Ajei Gopal’s Keynote on 3D-IC at Samsung SAFE Forum


More Than Moore and Charting the Path Beyond 3nm
by Kalar Rajendiran on 12-22-2021 at 10:00 am

The incredible growth that the semiconductor industry has enjoyed over the last several decades is attributed to Moore’s Law. While no one argues that point, there is also industry-wide acknowledgment that Moore’s Law started slowing down around the 7nm process node. While feature-size reductions still scale, performance jumps and power reductions aren’t scaling as they used to. At the same time, die sizes have been increasing at an unsustainable rate, reaching close to the current reticle size limit. This has introduced a myriad of issues to tackle. The industry as a whole has been working on various ways to overcome the hurdles, and a lot has been written about the solutions being pursued to address specific aspects.

Something I haven’t often come across is a treatise on the Moore’s Law era and what is needed for the next era. One such presentation was made at the recently concluded DAC 2021. The talk was given by Michael Jackson, Ph.D., Corporate VP of Research and Development at Cadence. The semiconductor market couldn’t have developed to even a fraction of its current size without the electronic design automation (EDA) industry. Michael takes us through his view of how EDA enabled Moore’s Law and the changes happening to EDA as driven by AI/ML. The last part of his presentation covers the integration changes needed to drive the continued growth of the semiconductor industry. The following is a synthesis of what I garnered from his talk, titled “More Than Moore and Charting the Path Beyond 3nm.” You can listen to Michael’s entire talk in the TechTalks track of the DAC 2021 virtual sessions.

Three ways EDA has fundamentally enabled Moore’s Law

Process technology advances are an obvious enabler of Moore’s Law, as they are intrinsic to it. Another, less intrinsic but nonetheless fundamental, enabler of Moore’s Law is EDA. The three ways EDA has enabled Moore’s Law are design methodology, EDA tool turnaround time (TAT) and process technology enablement.

Design Methodology

EDA has advanced from polygon pushing to transistor-level, then cell-based, and now IP-reuse design methodologies. At every step of this progression, EDA has delivered an average 10x productivity boost.

EDA Tool TAT

If a tool’s run time can be cut in half (say, from 8 hours to 4 hours), that translates into a huge benefit for a designer. Since the early 2000s, the EDA industry has focused more on such core values and less on features for features’ sake. Inspired by Moore’s Law, TAT improvement became a major focus for each release of tools across the EDA industry.

Michael shares examples of systematic runtime performance improvements release over release: synthesis products improving runtime 1.5x with every release, as measured statistically over suite runs rather than over just a few select designs; emulation capacity increasing more than 10,000-fold over the last 30 years; and the Spectre® X simulator delivering a 10x speed improvement over Spectre APS while maintaining Spectre golden accuracy standards.

Process Technology Enablement

Process technology advances impact EDA tools with hard and soft requirements that must be addressed. Hard requirements are changes at each process technology node that must be handled by EDA tools; examples are double-patterning, special via support, DRC rule enablement and extraction enablement. Place-and-route tools are highly dependent on these process-technology-driven hard requirements. At the other end of the spectrum are RTL simulation tools, with very low dependency on process technology. Then there are soft requirements, such as accuracy improvements that enable better analysis and optimization at each process node. Low-voltage accuracy and aging analysis are examples of soft requirements that are process-node dependent.

ML-enabled EDA is the next Big Thing

EDA is full of NP-hard/NP-complete problems that are non-trivial to solve and would require exponential run times to solve exactly. Because of this, overdesign and margin inefficiencies are traditionally built into designs to save on run times. Machine learning’s robust, rapid pattern-matching framework can reduce that overdesign and those margin inefficiencies.

ML-Based EDA can

  • help change design methodology as well as help improve run times of EDA tools
  • improve PPA results

Cadence’s ML-enabled EDA tools and capabilities span a wide spectrum of functional areas. Refer to the figure below.

The Cadence Cerebrus™ full-flow digital implementation solution, for example, delivers PPA and runtime improvements and frees engineering resources to work on more designs. This has been covered in detail in an earlier post. Michael provides a number of examples of improvements achieved through ML-enabled EDA.

Solution requirements needed to support the More than Moore Era

The slowdown of Moore’s Law has accelerated the growth of complex system designs, leading to heterogeneous system integration. This era is termed the More than Moore era. Just as the Moore’s Law era was enabled by EDA, so will the More than Moore era be, in the form of 3D design methodology, 3D EDA tool TAT improvements, and 3D process technology enablement. Today’s complex systems call for integrating digital, analog, RF, sensors, passives and fluidics in 3D ICs and on PCBs.

Cadence has been investing in multi-chip(let) packaging for a long time. When dealing with 3D-IC requirements, already complex and time-consuming tasks take on an even larger scale. For example, consider static timing analysis (STA) and the number of corners required for signoff. When going from a single-die implementation to a chiplet implementation, the number of signoff corners could increase 10x-100x depending on the design. Cadence’s Rapid, Automated Inter-Die (RAID) analysis significantly reduces STA corner data and TAT. Cadence has also developed and incorporated other capabilities into Tempus to improve efficiency for 3D-ICs. To avoid costly overdesign of the individual dies and packages that make up a 3D-IC, a fully integrated platform is needed: a platform that integrates die implementation, package design, power, thermal and timing analysis, and DRC/LVS checking, all operating on a common multi-technology database.
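
To see why the corner count explodes, here is a back-of-the-envelope sketch. The per-die corner count and the independence assumption are illustrative only; this is arithmetic, not how RAID itself works:

```python
# Illustrative arithmetic only: worst-case cross-die signoff corners for a
# 3D-IC if every die's corners had to be combined with every other die's.
# Real flows (e.g. Cadence RAID) prune this space instead of enumerating it.

def signoff_corners(corners_per_die: int, num_dies: int) -> int:
    """Naive cross-product of per-die corners, assuming independence."""
    return corners_per_die ** num_dies

single_die = signoff_corners(corners_per_die=8, num_dies=1)  # 8
two_dies = signoff_corners(corners_per_die=8, num_dies=2)    # 64
print(two_dies // single_die)  # 8x more; a few dies quickly reach 10x-100x+
```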

Also Read

Topics for Innovation in Verification

Learning-Based Power Modeling. Innovation in Verification

Battery Sipping HiFi DSP Offers Always-On Sensor Fusion


DAC 2021 – Siemens EDA talks about using the Cloud
by Daniel Payne on 12-21-2021 at 10:00 am

My third event at DAC on Monday was all about using EDA tools in the cloud, so I listened to Craig Johnson, VP of EDA Cloud Solutions at Siemens EDA. Earlier in the day I had heard from Joe Sawicki, Siemens EDA, on the topic of digitalization.

Craig Johnson, Siemens EDA

Why even use the Cloud for EDA? That’s a fair question to ask, and Craig had several high-level answers:

  • Increased throughput
  • Higher capacity and availability
  • VMs tailored to specific workloads for maximum compute efficiency
  • Enables multi-party collaboration
  • Provides a global scale and consistency
  • More services are available
  • Better testing and optimization of tools and flows

Siemens EDA supports the three major cloud vendors: AWS, Azure and Google. Mr. Johnson shared that engineering teams come up to speed with cloud-based tool flows through a process of starting out with deployment planning resources, reading technical papers, watching presentations, finding application notes, making their own checklists, creating deployment guides, receiving AE assistance, using templates, deploying EDA tools, and re-using cloud-specific scripts.

Several specific EDA tools were mentioned from Siemens EDA, like:

Calibre nmDRC

Design groups can use cloud-based tools in a self-managed environment, or have Siemens manage the environment for them. Craig showed that there are four ways to use cloud-based EDA tools: a managed cloud from Siemens; cloud connected, as an extension to on-premise compute; cloud native, for full or partial tool flows; and finally Velocity cloud, which puts the Veloce emulator in the cloud.

Cloud Offerings

With the managed cloud offering from Siemens, they will configure all of your software tools in the cloud, provide CAD support, and share reference designs to get you started quickly. This approach keeps your engineering headcount lower by using the cloud as a service. Data traceability is included, so you’ll always know who uses a particular tool and what designs they have run through it. VPN technology gives your engineers a remote desktop to run each of the EDA tools in the managed cloud.

For a cloud connected tool flow, you start with on-premise compute, then add cloud services as needed, depending on the workloads. Peak loads can be handled in the connected cloud, as sketched below.
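
Here is a minimal sketch of that bursting idea; the slot count and function names are hypothetical, not a Siemens EDA interface:

```python
# Minimal sketch of the cloud-connected model: steady-state jobs stay on
# on-premise machines and only the overflow bursts to cloud instances.
# The slot count and names are hypothetical.

ON_PREM_SLOTS = 200  # compute slots available in-house

def split_workload(jobs_queued: int) -> tuple[int, int]:
    """Return (jobs run on-premise, jobs burst to the cloud)."""
    on_prem = min(jobs_queued, ON_PREM_SLOTS)
    return on_prem, jobs_queued - on_prem

print(split_workload(150))  # (150, 0)   -- normal load fits on-premise
print(split_workload(800))  # (200, 600) -- tapeout peak bursts to cloud
```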

The cloud native flow has all of your EDA tools in the cloud, along with all design data, tool results, log files, PDK files, semiconductor IP, tests, etc. One application is for IC companies to showcase their new chips by providing a virtual evaluation board in the cloud, instead of manufacturing a board and shipping it out for evaluation. Engineers could evaluate the new chip as mounted on a virtual evaluation board, apply stimulus, make measurements, and even run their own firmware or software.

Buying a hardware emulator is expensive, so offering an emulator in the cloud makes a lot of economic sense if your team just needs to run some software on a new SoC before silicon is ready. Emulation as a service is an emerging market and can be quite attractive for first-time emulation users.

In summary, doing EDA in the cloud makes sense because of the speed benefits, and with the cloud you can do simulations, verifications, virtual boards and even emulation. Most of these tasks are not as feasible with on-premise infrastructure.

Topics for Innovation in Verification
by Bernard Murphy on 12-21-2021 at 6:00 am

Paul, Raúl and I are having fun with our Innovation in Verification series, and it seems you are too, judging by the hit rates we’re getting. We track these carefully to judge what you find most interesting and what falls more under the category of “Meh”. Paul and others also get informal feedback in client meetings, but it would be great if we could get active feedback from you, the readers, on what topics would most interest you. We’d like to tune our picks to your preferences.

For example, we’re planning an upcoming review of a paper on dynamic coherency testing, because there are strong indications from multiple directions that verification teams want more input here. In that spirit, I have a few questions for you, and I’m looking forward to your feedback. Quick comments or carefully considered, voluminous responses are equally fine. We’ll use your feedback as an input to our future topics for Innovation in Verification.

Application papers versus academic papers

We have tended to pick academic papers since these are, at least in principle, most likely to aim at breakthroughs. Application papers are no less worthy, but they tend to aim at very targeted in-house optimizations: apps to simplify or improve a specific verification objective. If they are tied to specific vendor tools, that may also limit broad interest.

Topic areas

We’ve looked at most areas in verification. At the block level there’s always opportunity to improve coverage, and also how quickly we can get to coverage. System-level verification is wide open: lots of opportunity to debate subsystem testing, coverage, how best to define tests at the system level, and the relative merits of synthetic versus real-life tests. Then there are the non-functional KPIs: performance, power, security and safety, especially as architectures for managing security and safety continue to evolve.

Post-silicon debug is clearly topical, reflecting limitations in how well (or not so well) we are able to limit escapes in pre-silicon verification. Optimizing the total verification flow, beyond individual run performance, is also picking up, in part through reducing total regression times via learning-based optimization. Even more broadly, many readers are experimenting with Agile methods, integrating with design processes for continuous integration and deployment (CI/CD) flows.

We could also cover more in some areas we have neglected: mixed signal verification, ML hardware verification and virtual modeling are examples.

Vertical extensions

Vertical validation is becoming increasingly important. In automotive, aerospace, the IoT, HPC, medical and many other domains, system objectives are moving much closer to silicon. As a result, completing a test plan needs to comprehend not only verification objectives but also system validation objectives. One indication is the growing importance of requirements traceability, from high-level design down into the software and silicon. While looking for papers on traditional verification topics, I’ve also come across related papers on system-level validation for robotics and other autonomous applications, suggesting a trend toward these cross-domain validation problems.

This applies particularly, for example, to sensing and sensor fusion. The front end here is obviously AMS, though there can be significant digital content to control calibration. Fusion is important, especially in safety-critical systems, and requires close interaction between hardware and software to ensure real-time reaction to changes.

There are lots of opportunities to explore existing domains more deeply and to add new domains. Please let me know what you think, either as a comment or by emailing me directly (info@findthestory.net).

Also Read

Learning-Based Power Modeling. Innovation in Verification

Battery Sipping HiFi DSP Offers Always-On Sensor Fusion

Memory Consistency Checks at RTL. Innovation in Verification


DAC 2021 Wrap-up – S2C turns more than a few heads
by Ron Green on 12-20-2021 at 10:00 am

SemiWiki Founder Daniel Nenni and S2C Cofounder Mon-Ren Chen

Now that the 58th Design Automation Conference, held this year in San Francisco, has concluded, we take a minute to look back at the results and ascertain what the show meant for our company.

Unfortunately, many popular tradeshows held in the time of Covid have suffered a drop in attendance, and DAC was no exception. Despite this, however, S2C is pleased to report that the quality of visitors to our booth was quite high. In contrast to several other vendors, we chose to exhibit and demonstrate our latest hardware and software offerings on the show floor, giving customers the chance to examine our products live and close up.

High-performance prototyping stands at the crossroads of two powerful trends: the increasing size and complexity of SoCs, coupled with the need to validate systems at-speed on real hardware. S2C is ideally positioned to capitalize on these trends. DAC gave us the opportunity to demonstrate how we can satisfy designers’ needs, and helped us by generating good interest, good questions, and good leads.

On the show floor we displayed a number of our latest prototyping products, including the Prodigy Logic System 10M, based on the industry’s largest FPGA, Intel’s Stratix 10 GX 10M. Also on display were our Xilinx-based systems, the Prodigy Logic S7-19P and S7-9P, both of which got their fair share of attention.

But without question, the highlight of our booth was our new Prodigy Logic Module LX2. Built around Xilinx’s largest Virtex UltraScale+ device, the VU19P, the LX2 houses eight of these FPGAs, producing a machine of unrivaled speed and capacity. Furthermore, the LX2 architecture provides for interconnecting up to 8 LX2 units, offering the breathtaking capacity of 64 FPGAs – a true heavy-lifting machine. Several customers commented on how impressed they were with the system’s specs and capabilities. In the world of high-performance prototyping, the LX2 looks like the one to beat.

But what good is a high-performance prototype if you can’t perform debug? This was a question on the minds of many. We addressed that issue by showing the MDM Pro, our multi-FPGA debug module that is compatible with all our prototyping platforms. The MDM Pro captures the data generated by long-running events, allowing the on-board FPGA memory to be preserved for your design needs. When we pointed out that the MDM Pro module comes as a built-in part of both the 10MQ and S7-19PQ systems, there were several nods of approval.

One skeptic however, remained unconvinced. “Nice hardware,” he was heard to say. “But you guys got any software to go with this stuff?”

Absolutely. If 18 years in the business have taught us anything, it’s that productivity software is critical to the prototyping effort. We were able to demonstrate our premier software offering, PlayerPro, which comprises several modules: Compile, for partitioning, downloading, and configuring a prototype; Runtime, a module for dynamic control of your prototype; and Debug, to configure and work with the MDM Pro hardware.

Also demonstrated was ProtoBridge, an application that supports 4GB/s data transfers via a C API to an AXI4 bus driver. This tool enables high-bandwidth data transfers – such as video – between a PC and an FPGA.

To round out our offerings, we displayed a portion of our Prototype Ready IP Library: a rich collection of plug-and-play daughter cards that include memory and interfaces to speed your prototype development. During the course of the show, we responded to inquiries from customers working in fields as diverse as Storage, AI, Networking, and Automotive. There was a fair amount of interest from universities as well.

More than one customer asked about system availability. We were pleased to give the answer everyone wanted to hear: these systems are ready and available now. Your Christmas presents may be stuck in the back of Santa’s Workshop, but not S2C! Our products are in stock and ready to ship with short lead times and fast deliveries!

Overall, DAC was a successful show for us, helping to give our products visibility in the market, and setting the stage for next year. DAC is a unique and useful event, and we’ll definitely be back – at DAC!

About S2C

S2C is a global leader in FPGA prototyping solutions for today’s innovative SoC/ASIC designs. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 500 customers and more than 3,000 systems installed, our highly qualified engineering team and customer-centric sales team understand our users’ SoC development needs. S2C has offices and sales representatives in the US, Europe, Israel, China, Korea, Japan, and Taiwan.

For more information, please visit www.s2ceda.com

Also Read:

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions

S2C EDA Delivers on Plan to Scale-Up FPGA Prototyping Platforms to Billions of Gates

S2C’s FPGA Prototyping Solutions


Bringing PCIe Gen 6 Devices to Market
by Daniel Nenni on 12-20-2021 at 6:00 am

PCIe is a prevalent and popular interface standard used in just about every digital electronic system. It is used widely in SoCs and in the devices that connect to them. Since it was first released in 2003, it has evolved to keep up with rapidly accelerating needs for high-speed data transfer. Each version has doubled throughput, with updates coming every few years – except for the notable gap between versions 3.0 and 4.0. PCIe Gen 6 is expected to have its final release in 2021.

PCIe Gen 6 supports 126 GB/s in each direction when using 16 lanes; the individual lane speed will be 7.87 GB/s. Many changes were made in the specification to achieve these data rates. The most significant of these are the change to PAM-4 (pulse amplitude modulation with four levels) signaling and the addition of forward error correction. Numerous other changes were made to the protocol as well. As is always the case, PCIe Gen 6 interfaces will be backward compatible with earlier versions to ensure interoperability. All of this is good news for system designers in need of higher bandwidth and flexibility.
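
As a quick sanity check tying those two figures together (simple arithmetic on the quoted numbers; the roughly 1.6% overhead factor is my inference, not a value from the specification):

\[
\frac{64\ \text{Gb/s per lane}}{8\ \text{bits/byte}} = 8\ \text{GB/s raw},\qquad
8\ \text{GB/s}\times(1-0.016)\approx 7.87\ \text{GB/s},\qquad
16\times 7.87\ \text{GB/s}\approx 126\ \text{GB/s}.
\]

Here 64 GT/s delivers 64 Gb/s per lane because PAM-4 carries two bits per symbol at 32 GBaud.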

However, these changes mean that designing and verifying complete and correct functionality has become even more difficult. Many system designers will choose to use IP blocks to help implement PCIe Gen 6 in their designs. Whether the interface controller and PHY are developed in-house or outsourced, complete verification is a necessity.

Developing a test suite takes a level of effort on par with, or greater than, developing the PCIe IP itself. Fortunately, Truechip, a developer of verification IP (VIP), offers a complete test suite and verification environment for PCIe Gen 6. Their VIP is fully compliant with the latest PCIe Gen 6 specifications. It is built, using years of experience, to be lightweight, with an easy plug-and-play interface to ensure rapid deployment.

Their PCIe testbench includes agents for the Root Complex and the Device Endpoint, each with bus functional models for the TL, DL and PHY layers. In addition, there is a PCIe Bus Monitor, which performs many useful operations: it supports assertions and coverage, as well as checkers for the TL, DL and PHY. All of this is connected to a scoreboard to help monitor test results.

The testbench is backward compatible with all of the relevant earlier specifications. It supports precoding for 32 GT/s and 64 GT/s, PAM-4 signaling, FLIT and non-FLIT modes, and the new PIPE 6.0 specification. It can be configured to support link widths from x1 to x16. All low-power management states, including the new L0p state, are available. The list of features in the documentation and data sheet is comprehensive and covers every feature in the specification.

To ensure comprehensive validation, the test environment and test suite provide a wide range of tests. Users can run basic and directed protocol tests, as well as random tests and error-scenario tests. Truechip includes assertions and cover-point tests. Lastly, there are compliance tests to ensure the finished product will work smoothly with other PCIe Gen 6 devices. A full set of documentation walks through the integration process and can be used as a reference guide during use.

The time frame for bringing PCIe Gen 6 devices to market is fast approaching, and Truechip has already made customer deliveries of this VIP product. Having ready-to-go VIP can make a big positive impact on the development and testing schedules for products that rely on PCIe Gen 6. With PCIe playing such a large role in SoCs and device operation, it is crucial to support the latest standard and be able to offer the highest interoperability, quality and reliability. Truechip offers much more information about their PCIe Gen 6 VIP on their website. If you are developing products that rely on PCIe Gen 6, it might be worth a look.

Also read:

PCIe Gen 6 Verification IP Speeds Up Chip Development

USB4 Makes Interfacing Easy, But is Hard to Implement

TrueChip CXL Verification IP

 


Pattern Shifts Induced by Dipole-Illuminated EUV Masks
by Fred Chen on 12-19-2021 at 10:00 am

As EUV lithography is targeted at pitches of 30 nm or less, fundamental differences from conventional DUV lithography become more and more obvious. A big difference is in the mask. Unlike other photolithography masks, EUV masks are absorber patterns on a reflective multilayer rather than on a transparent substrate. Most articles on EUV lithography do not go into the details that SPIE papers do [1,2]. Figure 1 shows the fundamentally different aspects of EUV masks.

Figure 1. An EUV mask differs from an ideal mask in that the absorbers partly transmit EUV light into the multilayer substrate, which then reflects the light back through the absorbers for a second pass.

EUV masks are essentially attenuated phase-shift masks, where the phase shift is very different from the ideal 180 degrees. In fact, the phase shift depends on the illumination angle as well as the absorber thickness, and a phase shift also comes from propagation through the multilayer. Since all illumination is from one side, shadowing is a natural consequence as well [1].

For the tighter pitches, dipole illumination is used. For EUV mask illumination within the plane of incidence, this means one illumination angle will be larger than the other. The result is that the image from one angle is dimmer and shifted in phase, i.e., position, relative to the other (Figure 2). For this image calculation, the absorber was assumed to be 60 nm thick, with an optical constant of 0.94+0.04i, and the multilayer reflectance at the 13.5 nm wavelength was obtained from the CXRO database [3].
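
To make the phase-to-position link concrete, consider a minimal two-beam picture (my simplification; the full calculation above also includes the absorber transmission and multilayer reflectance). For one pole, let the interfering zeroth and first diffraction orders of a line/space pattern at pitch $p$ have amplitudes $a_0$ and $|a_1|e^{i\phi}$:

\[
I(x) = |a_0|^2 + |a_1|^2 + 2|a_0||a_1|\cos\!\left(\frac{2\pi x}{p} + \phi\right),
\]

so the relative phase $\phi$ displaces the fringe by $\Delta x = -\phi p/(2\pi)$. Because both $|a_1|$ and $\phi$ vary with incidence angle, the two poles produce fringes of different brightness and different $\Delta x$, and their incoherent sum is an asymmetric, shifted image.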

Figure 2. A dipole illumination tuned for 30 nm pitch would produce a symmetric image for an ideal mask, but not so for an EUV mask.

It’s apparent that the image is displaced by an amount that depends on the illumination angle. Figure 3 shows that dipoles spaced apart with different distances from the pupil center produce different shifts. The closer the two poles are to the center, the less asymmetry there is, as the illumination angles differ by less.

Figure 3. Different dipole illumination positions produce different EUV image shifts. The closer to the center, the less disparity between the images of the two pole angles, and therefore the less asymmetric the image.

The pattern shifts are more severe at tighter pitches. It should therefore be no surprise to see growing consideration of, for example, different absorbers for “next-generation” EUV masks.

References

[1] S. Sherwin et al., “Advanced multilayer design to mitigate EUV shadowing,” Proc. SPIE 10957, 1095715 (2019).

[2] E. van Setten et al., “Multilayer optimization for high-NA EUV mask3D suppression,” Proc. SPIE 11517, 115170Y (2020).

[3] https://henke.lbl.gov/optical_constants/multi2.html

“Too Big To Fail Two” – Could chip failure take down tech & entire economy?
by Robert Maire on 12-19-2021 at 6:00 am

-Chips enable tech sector which underpins entire economy
-Is the US chip sector “Too Big To Fail”?
-If US chip industry fails, does tech & everything else follow?
-How chip/Taiwan crisis compares to 2008 financial meltdown

It was the best of times, it was the worst of times

We find it an incredible juxtaposition that we are experiencing the greatest strength and growth the semiconductor industry has ever seen, yet the threat to its largest participants, and to the world order of the semiconductor industry, is nothing less than existential.

Semiconductor demand and strength are off the charts, yet Taiwan/TSMC is literally under the gun and Intel needs a string of “Hail Mary” plays to get back into the technology race that defines success in the industry.

Things could look incredibly different in a short period of time. We are on the precipice of potential extreme change.

We may have seen a similar movie before in the overheated financial sector that led up to the 2008 financial crisis, which could have had a cataclysmic ending were it not for some strong, last-ditch intervention. This is not to suggest that the subprime mortgage industry and current chip demand are similar; one was almost fraudulent and the other is real demand. The only parallel we draw is that strong intervention may be needed to avert a potentially much larger problem. The risks the semiconductor industry faces are both self-inflicted and external.

Intervention may also take other forms than just pure financial assistance as these risks are varied.

Is the US semiconductor industry Too Big To Fail?

What could happen if Intel fails to get back into the technology race? What if TSMC remains the only leading edge foundry for fabless US chip companies such as Nvidia, Qualcomm and AMD? Can Micron keep up in the memory industry in the face of a torrent of spending in Asia?

Obviously, hundreds of billions of dollars of semiconductor revenue are at risk for US-based companies, but perhaps more importantly, trillions of dollars of goods rely on the semiconductor industry for the very heart of their products – from the auto industry to defense, communications, the cloud, mobile phones and well beyond.

Cutting out the heart of the stock market and years of a rally

Let’s just think about what the stock market would have looked like, and what it would look like in the future, without the semiconductor industry as it is today.

In case you have been living under a rock with your money stuffed in a mattress, the main driver of the stock market has been tech stocks. Yes, other sectors have done well, but tech, and especially semiconductors, has been at the heart of the market’s strength, creating much of its momentum. The stock market, and with it many investors’ net worth, would look a lot different.

Semiconductors are inside, and critical to, so many industries that it somewhat reminds us of how AIG was ingrained in the very fabric of the financial industry, in many products and companies, in ways people did not understand until the risk of its failure exposed just how deeply it was embedded. It took a $180B government bailout to rescue AIG from taking the whole financial sector down with it.

The chip shortage that has stopped cars from shipping is just the beginning, the tip of an iceberg that goes much further and deeper into the economy with much greater risk.

So the question is: would the failure of the US semiconductor industry do less or more damage to the US economy than if AIG had failed? Maybe it’s also worth a $180B investment, and not just the small $52B Chips for America act, which is a relative drop in the bucket.

Maybe it’s not just Too Big to Fail but perhaps Too Critical to Fail as well…..

The risks are both internal and external

Given the international nature of the semiconductor industry and the US’s reliance on Taiwan and Korea, the risk profile is much more complex and less controllable, as it is not contained within our borders or jurisdiction.

The risks are also less measurable and more subject to “Black Swan” events that are neither well defined nor easy to protect against.

An example:

President Xi gets impatient in his quest to reunite Taiwan with the mainland and is further aggravated by the US denying him semiconductor technology. He decides that if he can’t have Taiwan and its chips, the US can’t have them either, and launches one conventional low-yield missile into TSMC’s leading fab – the one that produces chips for Apple, Intel, AMD, Nvidia, Qualcomm etc. – putting it out of commission.

Very few people would die or be injured, and it would not start a war, but the stock market would implode and the US tech industry would fall apart. A similar threat exists in Korea, as Samsung’s fabs are within artillery – not even missile – range of Kim Jong Un, who is clearly less stable.

These risks, while low, are more than zero

Internal threats are more similar to those of 2008’s financial crisis in that they are self-inflicted: not paying attention, failing to execute, or the like. The semiconductor industry requires laser-like focus, copious spending and a long-term view measured in years, not quarterly results. Developing and maintaining the talent pool is a very long-term effort that is key to the industry’s success.

Intervention & protection is both financial and systemic

The semiconductor industry needs financial help to build many new fabs in the US, along with the surrounding infrastructure, but it also needs proper political and governmental support to foster the industry, protect it and incentivize it.

While the Chips for America act is a good start, it is only a down payment, and without additional terms and guardrails it could be much less effective.

Chips for America needs a parallel bill that sets up the proper environment and infrastructure to foster the industry in the US.

The financial bailout needs to come with clear terms and ownership positions to ensure it is properly spent and taxpayers get a return on their investment, much as happened in the case of AIG.

There also needs to be some triage and prioritization of resources, such that the more critical companies in the semiconductor industry get more attention – much as Lehman was not on the priority list while AIG was. We would suggest a strong focus on the leading edge and all the associated enablers…..

Don’t get fooled by the current good times

We also think there may be some who question putting money and effort into an industry that is currently in “party mode”, with stocks at record highs in record time and more business and profits than companies can handle. This will not last forever. There could be a soft or hard landing, but there will be a landing at some point; supply almost always catches up with demand. Part of the need for action is to protect the industry when things aren’t as good as they are now.

When the shortages are over the issues risk being forgotten

Only over the last year have the general public and the political class gotten a very small inkling of the semiconductor industry, and only through secondary means such as the shortage of cars and other shortage-related issues. When the shortages are over, we risk being forgotten again as the general public focuses on the new topic du jour.

Even though semiconductors are ubiquitous, pervasive and critical, they are nonetheless “invisible” in our daily lives and thus easily forgotten unless a problem happens.

It’s hard to buy insurance for, or care about, a potential problem you can’t even remember. The semiconductor industry spent many years in obscurity and could easily return there.

The stocks

While many of the risks and issues are low-probability, we would still pay attention to the exposure our portfolios would have to events in the semiconductor industry that could snowball into much larger problems for tech and the general economy.

Many investors I speak to do not immediately grasp the direct connection between Taiwan/China and the greater tech industry and global economy, or how some small events there could create larger ripples through other sectors. This “Butterfly Effect” of the semiconductor industry is not fully recognized nor understood. Investors would be well served to look at these interrelationships and dependencies; it’s not just autos.

Spending time and money to help the industry is cheap insurance relative to the percentage of the US and global economy impacted by semiconductors – spending tens of billions to avert trillions in risk.

We would also try to predict which semiconductor-related companies would benefit most, and in what ways, from potential assistance efforts… and, just as importantly, who would lose out or be negatively impacted by those efforts. At the top of our list of Too Big to Fail (or perhaps Too Critical to Fail) would certainly be Intel and Micron, along with all the equipment companies that hold the manufacturing know-how, such as Applied Materials, KLAC, Lam, and foreign firms such as ASML and TEL, plus EDA companies and some materials companies. While these companies are certainly not at risk right now, they are nonetheless critical to the industry and its health.

While TSMC and Samsung are certainly highly critical, what is most critical is that their fabs be built in the US, within the safety of our borders, as insurance for our tech industry and greater economy, which currently rely on semiconductors from less stable regions.

The semiconductor industry is truly Too Big to Fail even though its products are too small to be seen and hidden in plain sight.

Also Read:

Supply Chain Breaks Under Strain Causes Miss, Weak Guide, Repairs Needed

Semicon West is Semicon Less

KLAC- Foundry/Logic Drives Outperformance- No Supply Chain Woes- Nice Beat


Podcast EP53: Breker’s New CEO Weighs in on the Company, DAC and the Future of Verification
by Daniel Nenni on 12-17-2021 at 10:00 am

Dan is joined by Dave Kelf, who was recently appointed CEO of Breker Verification Systems. Dave discusses Breker’s unique approach to verification of complex systems, what its future impact will be and what Breker will be doing at DAC.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


COVID Still Impacting Electronics
by Bill Jewell on 12-17-2021 at 6:00 am

Electronics production has been volatile over the past two years, primarily due to the COVID-19 pandemic. The three-month-average change in electronics production versus a year ago is shown below for key Asian countries. COVID-19 shutdowns affected production in early 2020, and trends in 2021 show a strong bounce back.
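
For readers who want to reproduce the metric used throughout this post, here is a minimal sketch of the calculation; the series below is made up purely for illustration:

```python
# Minimal sketch of the metric used throughout this post: the three-month-
# average change versus a year ago, computed from a monthly production index.
import pandas as pd

months = pd.date_range("2019-01-01", periods=36, freq="MS")
production = pd.Series(range(100, 136), index=months, dtype=float)  # toy data

three_month_avg = production.rolling(3).mean()
yoy_change_pct = three_month_avg.pct_change(12) * 100  # % vs. a year earlier

print(yoy_change_pct.dropna().tail())
```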

The key trends by country are:

South Korea – electronics production was not significantly impacted by COVID, with March 2020 three-month-average production up 25% from a year earlier. Recent growth has been strong, 20% or higher since June 2021. South Korea avoided significant COVID slowdowns by emphasizing early detection, containment, and treatment.

China – the country where the COVID virus originated imposed major shutdowns in early 2020, resulting in March 2020 production down 6% from a year earlier. Production recovered beginning in April 2020. Early 2021 showed a strong recovery, with March 2021 up 36% from the weak period a year earlier. China’s production growth has stabilized in the 12% to 13% range since May 2021.

Taiwan – production was moderately affected by COVID shutdowns, with March 2020 up only 5% from a year earlier, in contrast to the 20% plus growth in most of 2019. Since April 2020, Taiwan production growth has been relatively stable in the range of 5% to 11%.

Japan – electronics production has been declining for several years primarily due to manufacturing shifting to lower wage countries. Growth turned positive in August 2019 before declining again in December 2019. Japan avoided significant COVID cases early in the pandemic, but a surge of cases in July and August 2020 led to some shutdowns and a production decline of 18% in September 2020. Production turned positive in February 2021, reaching a peak of 11% in June 2021. Since June, production has decelerated, reaching 0% change in October 2021.

Vietnam – electronic production has been on a strong growth trend in recent years primarily due to manufacturing shifts from China and South Korea. COVID related shutdowns led to a production decline of 12% in May 2020. Production quickly recovered reaching 25% growth in January 2021. Vietnam was held up as an example to the world when its strict containment measures led to relatively few COVID cases in 2020. However, Vietnam saw a sharp increase in COVID cases driven by the Delta variant beginning in July 2021. A shutdown from July 8 to October 1, 2021, in much of the south of Vietnam resulted in an electronic production decline of 11% in August 2021. The decline eased to 6% in November 2021.

The following chart shows electronics production three-month-average change versus a year ago for the United States (U.S.), United Kingdom (UK), and the 27 countries of the European Union (EU27).

The key trends are:

United States – electronics production was not significantly affected by COVID-19 as shutdowns of factories were isolated. U.S. production growth was weak in 2019, ranging from a 1% decline to a 2% increase. The weakness continued in the first half of 2020 before picking up to growth in the 6% to 9% range in August 2020 through August 2021. In September and October 2021 growth moderated to about 4%.

United Kingdom – production was generally weak in 2019, ranging from a 3% decline to 4% growth. The UK instituted a nationwide lockdown due to COVID beginning in March 2020 and easing up in May and June of 2020. Production declined by 19% from a year ago in May and June of 2020. Year-to-year growth did not turn positive until April 2021 and peaked at 13% growth in June 2021 compared to the weak June of 2020. Growth has been decelerating in the last several months, with October 2021 down 3% from a year ago. In addition to COVID, the UK has been dealing with the effects of Brexit (the UK withdrawal from the EU) which became official at the end of 2020.

European Union – countries had varied lockdown policies in early 2020, but the overall effect was a 6% decline in production in April and May of 2020 versus a year earlier. Production rebounded to a strong 24% growth in January 2021 and has remained in the 18% to 30% growth range since. EU electronics production has been a beneficiary of Brexit as some production previously done in the UK has now shifted to the EU. Also, the EU27 as a whole has been less impacted by COVID than the UK. According to Worldometer, the UK has 162 COVID-19 cases per 1,000 people, twice the rate of 80 in Germany, the largest EU manufacturer.

The impact of COVID-19 is also reflected in the unit shipment data for two key electronic devices: PCs and smartphones. According to IDC, PC shipments fell 8% in 1Q 2020 versus a year earlier, primarily due to COVID-related production shutdowns. In the next three quarters, PC shipments grew strongly, from a 14% increase in 2Q 2020 to 26% in 4Q 2020, and 1Q 2021 was up 55% compared to the weak 1Q 2020. Demand for PCs was strong due to the pandemic: shutdowns and other restrictions forced many people to work from home and many students to learn from home, and the increase in electronic communication led many households to acquire or upgrade PCs. PC growth moderated to 4% in 3Q 2021 as much of the demand increase was satisfied. In addition, component shortages limited some PC production. This month, IDC projected PC shipments will increase 13.5% in 2021 and moderate to 0.3% growth in 2022.

Smartphone shipments were hurt heavily by the COVID pandemic in 1Q 2020, since most production is done in China, which shut down most of its manufacturing in early 2020. IDC stated shipments were down versus a year ago by 12% in 1Q 2020 and by 17% in 2Q 2020. Shipment growth recovered to 26% in 1Q 2021 and 13% in 2Q 2021. In 3Q 2021, shipments were down 7% from a year ago; IDC attributes the decline to component shortages and other logistical problems. IDC expects year 2021 smartphone growth of 5.3%, moderating slightly to 3.0% in 2022.

The world and the electronics industry are still feeling major effects from the COVID-19 pandemic. Worldometer shows the world is currently in a fifth wave of the virus. However, the death rate from COVID-19 is declining due to vaccinations, better treatments, and improved control methods. Electronics production has been hurt by various production shutdowns, component shortages and logistical challenges. These issues will probably continue through most of 2022. By 2023, electronics production should be back to typical trends. I am not using the word normal, since nothing will seem normal again for several years.

Also Read:

2021 Finishing Strong with 2022 Moderating

Semiconductor CapEx too strong?

Auto Semiconductor Shortage Worsens