
Can Attenuated Phase-Shift Masks Work For EUV?

by Fred Chen on 04-18-2023 at 6:00 am


Normalized image log-slope (NILS) is probably the single most essential metric for describing lithographic image quality. It is defined as the slope of the log of intensity, multiplied by the linewidth [1]: NILS = w * d(log I)/dx = (w/I)(dI/dx). Essentially, it gives the % change in linewidth for a given % change in dose. This is particularly critical for EUV lithography, where stochastic variations in dose occur naturally.
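As a rough illustration, NILS can be computed numerically from any intensity profile. The sinusoidal profile and the numbers below are invented for illustration; only the NILS definition itself comes from the text.

```python
import numpy as np

# Hypothetical aerial-image intensity for a dark line of width w on pitch p:
# I(x) = 0.5 - 0.4*cos(2*pi*x/p). Illustrative values only.
p = 32.0   # pitch, nm (assumed)
w = 16.0   # target linewidth, nm (assumed)

def intensity(x):
    return 0.5 - 0.4 * np.cos(2 * np.pi * x / p)

# NILS = w * d(ln I)/dx, evaluated at the feature edge x = w/2,
# here approximated by a central finite difference.
def nils(x, dx=1e-4):
    dlogI = (np.log(intensity(x + dx)) - np.log(intensity(x - dx))) / (2 * dx)
    return w * abs(dlogI)

print(round(nils(w / 2), 2))  # → 2.51 for this toy profile
```

A NILS above ~2, as here, is the kind of value the article treats as acceptable image quality.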

A dark feature against a bright background has a higher NILS than a bright feature against a dark background, because the intensity in the denominator is much lower for the dark feature. In this case, the NILS can also be made sufficiently high, e.g., > 2, with a sufficiently large mask bias, i.e., a dark feature size on the mask larger than 4x the targeted wafer dark feature size. However, if the dark feature on the mask is too large compared to the spacing between features, then too little light reaches the wafer, which means a longer exposure time is needed to accumulate a sufficient number of absorbed photons. A way around this is to use an attenuated phase shift mask, or attPSM for short. Here the dark feature actually transmits some light through the mask, imparting a phase shift of 180 degrees. Both transmission (or reflectivity in the case of EUV) and phase are adjusted via the material and thickness of the dark feature on the mask.
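To see why a partially transmitting, 180-degree-shifted absorber helps, here is a toy coherent image model using only the 0th and ±1st diffraction orders. The 1:1 duty cycle and the simple Fourier coefficients are my own illustrative assumptions, not the 4-beam quadrupole model behind Figure 1.

```python
import numpy as np

# Mask field transmission t of the dark line (width a, pitch p):
# t = 0 for a binary mask, t = -sqrt(0.06) for a 6% attPSM (180-degree phase).
def orders(a_over_p, t):
    c0 = (1 - a_over_p) + t * a_over_p            # DC (0th) order
    c1 = (t - 1) * a_over_p * np.sinc(a_over_p)   # first diffraction order
    return c0, c1

def image(x_over_p, a_over_p, t):
    c0, c1 = orders(a_over_p, t)
    return np.abs(c0 + 2 * c1 * np.cos(2 * np.pi * x_over_p)) ** 2

# Bright-region intensity midway between lines, same 1:1 duty cycle:
binary = image(0.5, 0.5, 0.0)
attpsm = image(0.5, 0.5, -np.sqrt(0.06))
print(binary, attpsm)  # the attPSM delivers more light to the bright region
```

Even at the same bias, the negative-amplitude absorber strengthens the first diffraction order, which is why in practice the attPSM can afford a smaller bias for the same NILS.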

Figure 1. The same NILS requires a much longer exposure with a binary (T=0) mask than with a 6% attPSM. This is based on a 4-beam image of a dark island feature (width w, pitch p) in an expected quadrupole illumination scenario.

In Figure 1, we see that at the same NILS, the log-slope curves are similar in shape, but the attPSM, with less bias than the binary mask, allows more light to reach the wafer, so a long exposure is not needed.

With the advantage of using an attPSM made clear, let’s turn now to why EUV hasn’t implemented any yet. A fundamental difference between an EUV mask and a DUV mask is that while there is only a single pass of light through the DUV mask, the EUV mask has two passes of light through the pattern layer, and in between passes, the light propagates through a multilayer, which tends to absorb more light at higher angles [2].

Figure 2. While a DUV mask (left) is treated as a thin pattern layer, an EUV mask (right) is treated as two pattern layers separated by an absorbing layer, i.e., the multilayer.

Consequently, the phase shift (no longer targeted at 180 degrees, but over 200 degrees [2]) is distributed over multiple layers, and not easily tailored by adjusting one layer’s thickness. Moreover, the known candidate materials are hard to process with good control. Materials like ruthenium and molybdenum easily oxidize. A few nanometers’ change of thickness can add tens of degrees of phase shift [3]. The individual wavelengths within the 13.2-13.8 nm range also have significantly different phase shifts, as well as reflectivities from the multilayer [4]. Despite these complicating factors, designing attPSMs for EUV continues to be a topic of ongoing investigation [5].
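The thickness sensitivity can be estimated from the double-pass reflection geometry. This is a hedged back-of-the-envelope: the refractive-index decrement delta for ruthenium near 13.5 nm is taken as an assumed round number, not a tabulated value.

```python
import math

# Double-pass phase picked up in an absorber layer of thickness t:
# phi ~ (4*pi*t*delta)/lambda, where the refractive index is n = 1 - delta.
wavelength = 13.5   # nm, EUV
delta = 0.12        # assumed order-of-magnitude value for Ru at 13.5 nm

def phase_deg(thickness_nm):
    return math.degrees(4 * math.pi * thickness_nm * delta / wavelength)

# A 3 nm thickness change already shifts the phase by tens of degrees:
print(round(phase_deg(3.0)))  # → 19
```

This is consistent with the article's point: nanometer-level thickness control translates directly into tens of degrees of phase error.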

References

[1] C. A. Mack, “Using the Normalized Image Log-Slope,” The Lithography Expert, Microlithography World, Winter 2001: http://lithoguru.com/scientist/litho_tutor/TUTOR32%20(Winter%2001).pdf

[2] C. van Lare, F. Timmermans, J. Finders, “Mask-absorber optimization: the next phase,” J. Micro/Nanolith. MEMS MOEMS 19, 024401 (2020).

[3] I. Lee et al., “Thin Half-tone Phase Shift Mask Stack for Extreme Ultraviolet Lithography,” https://www.euvlitho.com/2011/P19.pdf

[4] A. Erdmann et al., “Simulation of polychromatic effects in high NA EUV lithography,” Proc. SPIE 11854,1185405 (2021).

[5] A. Erdmann, H. Mesilhy, P. Evanschitzky, “Attenuated phase shift masks: a wild card resolution enhancement for extreme ultraviolet lithography?,” J. Micro/Nanopattern. Mater. Metrol. 21, 020901 (2022).

This article first appeared in LinkedIn Pulse as Phase-Shifting Masks for NILS Improvement – A Handicap For EUV?

Also Read:

Lithography Resolution Limits: The Point Spread Function

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event

Resolution vs. Die Size Tradeoff Due to EUV Pupil Rotation

KLAC- Weak Guide-2023 will “drift down”-Not just memory weak, China & logic too


Podcast EP155: How User Experience design accelerates time-to-market and drives design wins

by Daniel Nenni on 04-17-2023 at 10:00 am

Dan is joined by Matt Genovese. Matt founded Planorama Design, a user experience design professional services company, to make complex, technical software and systems simple and intuitive to use while reducing internal development and support costs. Staffed with seasoned engineers and user experience designers, the company is headquartered in Austin, Texas.

Matt explains how Planorama Design helps hardware companies create simple, intuitive user experiences for the software they ship with their products. He describes the process used and the substantial business benefits their customers are seeing.

You can also learn more at the LIVE WEBINAR Matt will conduct on April 25 at 9AM Pacific time, entitled The ROI of User Experience Design: Increase Sales and Minimize Costs. You can register for the webinar HERE.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


AI Assists PCB Designers

by Daniel Payne on 04-17-2023 at 6:00 am


Generative AI is all the rage, with systems like ChatGPT, Google Bard and DALL-E being introduced with great fanfare in the past year. The EDA industry has also been keen to adopt the trend of using AI techniques to assist IC engineers across many disciplines. Saugat Sen, Product Marketing at Cadence, did a video call with me to explain what they’ve just announced: Allegro X AI. The history of Cadence includes both IC and systems-design EDA tools, plus verification IP and design services.

By using generative AI-driven PCB design in Allegro X AI there are three big goals:

  • Better and Faster Hardware Design
  • Improved PCB Designer Productivity
  • PCB Design Driven by Physics Engines

The typical PCB design flow has several sequential steps, where the most time-consuming parts are manual placement, followed by manual routing.

PCB Steps

With Allegro X AI, the electrical engineer specifies design constraints and runs the tool, which automates board placement, power and ground routing, plus critical signal routing.

Allegro X AI tool flow

Just like ADAS in the automotive world is bringing new automation levels to the driving experience, constraint-driven PCB design offers a shift-left time savings for electronic systems. Allegro X AI does not replace PCB designers, rather it makes the team of electrical engineer plus PCB designer more productive, saving valuable time to market.

Providing these new automation levels to the PCB design flow requires the compute power found in the cloud. One example PCB design that required 3 days for human-based placement now takes only 75 minutes with Allegro X AI, while producing a 14% better wire length metric.

Global Placement comparison

A second PCB example that Saugat showed me was for global placement and the automation results were impressive:

  • 20 minute runtime with Allegro X AI versus 3 human days
  • 0 opens, 0 DRCs, with 100% timing passes
  • Wire length improved by 3% using AI

In this new, AI-powered approach, the PCB designer still needs to complete detailed routing, but stay tuned for future improvements. With Allegro X AI an electrical engineer can quickly look at the feasibility of a PCB design, without using a layout designer resource. The learning curve for this new feature is quick, so expect to explore results on the very first day of use. Expect to use this technology on small to medium-sized PCB projects to start out with. The built-in engines for SI/PI analysis operate quickly, ensuring that your design meets timing and reliability constraints.

In the official press release you can read quotes from three companies that had early access to Allegro X AI:

Summary

PCB design is changing, and for the better, by using AI techniques found in tools like Allegro X AI from Cadence. You can expect benefits like better and faster hardware design, as an electrical engineer can explore the PCB design space more quickly, even giving the PCB designer a layout starting point. PCB designers become more productive, as placement and critical net routing become automated, freeing them up to complete the detailed routing task. Using constraints and built-in analytics for SI/PI is a physics-based approach, which helps produce a more optimal PCB design compared to fully manual methods.

Cadence engineered all of this AI technology in-house, and you just need to contact your local account manager to get started with Allegro X AI.

Related Blogs


Podcast EP154: The Future of Custom Silicon – Views From Alphawave Semi’s Sudhir Mallya

by Daniel Nenni on 04-14-2023 at 10:00 am

Dan is joined by Sudhir Mallya, Senior Vice President of Corporate Marketing at Alphawave Semi. He is based in Silicon Valley and has over 25 years of experience at leading global semiconductor companies with executive positions in engineering, marketing, and business unit management. His experience spans custom silicon and application-specific products across multiple domains including data centers, networking, storage, and edge-computing.

Dan explores the future of custom silicon with Sudhir. Key drivers, including data center, automotive and edge/IoT, are discussed. The tradeoffs between custom and off-the-shelf design are also explored, along with the importance of interface standards and the design challenges that lie ahead.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Multi-Die Systems: The Biggest Disruption in Computing for Years

by Daniel Nenni on 04-14-2023 at 6:00 am


At the recent Synopsys Users Group Meeting (SNUG) I had the honor of leading a panel of experts on the topic of chiplets. The discussion was based on a report published by the MIT Technology Review Insights in cooperation with Synopsys. This is a very comprehensive report (12 pages) that is available online HERE.

Here is the preface of the MIT paper:

“Multi-die systems define the future of semiconductors” is an MIT Technology Review Insights report sponsored by Synopsys. The report was produced through interviews with technologists, industry analysts, and experts worldwide, as well as a cross-industry poll of executives. Stephanie Walden was the writer for this report, Teresa Elsey was the editor, and Nicola Crepaldi was the publisher. The research is editorially independent, and the views expressed are those of MIT Technology Review Insights. This report draws on a poll of the MIT Technology Review Global Insights Panel, as well as a series of interviews with experts specializing in the semiconductor industry and chip design and manufacturing. Interviews occurred between December 2022 and February 2023.

The focus of a panel like this is the participants. During our lunch prior to the panel I learned quite a bit:

Simon Burke, AMD Senior Fellow, has 25+ years of experience building chips, starting with HPC vendor Silicon Graphics. He then moved to AMD, then to Xilinx, and back to AMD through the acquisition. An amazing journey, with both depth of knowledge and a great sense of humor. Simon and AMD are leaders in the chiplet race, so there was a lot to be learned here.

John Lee, Head of Electronics, Semiconductors and Optics at Ansys, is a serial entrepreneur. I met John when Avant! bought his signal integrity company in 1994. Synopsys then bought Avant! in 2002 and John became an R&D Director. John then co-founded Mojave Design, which was acquired by Magma in 2004. John left Magma after they were acquired by Synopsys and later founded Gear Design, a big data platform for chip design, which was acquired by Ansys in 2015. John is one of my favorite panelists; he says it like it is.

Javier DeLaCruz has 25+ years of experience, including a long stint with one of my favorite companies, eSilicon. Javier works for Arm handling advanced packaging technology development and architecture adaptation, including 2.xD and 3D systems. Arm is everywhere, so this is a big job.

Francois Piednol is Chief Architect at Mercedes-Benz, but prior to that he spent 20 years at Intel, so he knows stuff. Francois is also a member of UCIe and a jet pilot. He actually owns a jet, and as a pilot myself I could not be more impressed. Francois was part of the MIT chiplets paper mentioned above as well, so he is a great resource.

Dr. Henry Sheng, Group Director of R&D in the EDA Group at Synopsys, currently leads engineering for 3DIC, advanced technology and visualization. He has over 25 years of R&D experience in EDA, where he has led development across the spectrum of digital implementation, including placement, routing, optimization, timing, signal integrity and electromigration. He has previously led efforts on EDA enablement and collaborations for emerging silicon technology nodes. Henry knows EDA.

Dan Kochpatcharin is the Head of Design Infrastructure Management Division at TSMC. Dan is a 30+ year semiconductor professional with 25 years at foundries. For the past 15 years Dan has been instrumental in the creation of the TSMC OIP. Today he leads the OIP Ecosystem Partnerships: 3DFabric Alliance, IP Alliance, EDA Alliance, DCA, Cloud Alliance, and VCA. Dan K, as we call him, knows the foundry business inside and out. I always talk to Dan whenever I can.

Here is the abstract for the panel:

The new era of multi-die systems is an exciting inflection point in the semiconductor industry. From high-performance and hyper-disaggregated compute systems to fully autonomous cars and ultra-high-definition vision systems, multi-die chip designs will transform computing possibilities, driving many new innovations, expanding existing markets and paving the way for new ones. Critical to fueling this momentum is the coherent convergence of innovations across the semiconductor industry by EDA, IP, chiplet, foundry and OSAT leaders. But what’s really happening inside the companies driving what may be one of the biggest impacts on system design and performance in a very long time?

Join this panel of industry leaders who are at the forefront of shaping the multi-die system era. Many have already made the move or are making key contributions to help designers achieve multi-die system success. Listen to their insights, their views on how multi-die system approaches are evolving, and what they see as best practice. Hear about the near, medium, and long-term future for multi-die innovation.

Here are the questions I asked:

Why Multi-Die System and Why Now?
  1. Mercedes: What is driving the change, and what is multi-die system offering you?
  2. AMD: How do you see the trend to multi-die at AMD and what is the key driver?
  3. Synopsys: Are we seeing other markets move in this direction?
  4. TSMC: How are you seeing the overall market developing?
It Takes a Village?
  1. Arm: How are companies like Arm viewing the multi-die opportunity and how does something like multi-die impact the day-to-day work for designers and system architects working with Arm?
  2. Ansys: How is the signoff flow evolving and what is being done to help mitigate the growing signoff complexity challenge?
  3. Synopsys: What other industry collaborations, IP, and methodologies are required to address the system-level complexity challenge?
It’s Just the Beginning?
  1. TSMC: Which technologies are driving the multi-die growth trend and how do you see these technologies evolving over time?
  2. AMD: When do you foresee true 3D – logic-on-logic – entering the arena for AMD, and what kind of uplift would it offer compared to Infinity Fabric-style connectivity solutions?
  3. Synopsys: How are the EDA design flows and the associated IP evolving and where do customers want to see them go?
Designing Multi-Die Systems?
  1. Mercedes: How is the multi-die design challenge being handled at Mercedes and is it evolving in lock-step – true HW/SW co-design – with these ongoing software advancements?
  2. AMD: What methodology advancements would you like to see across the industry to make system development more efficient? And what kind of impact does multi-die system design have on block designers?
  3. Ansys: How is the increased learning curve for these multi-physics effects being addressed?
  4. Arm: How is the Arm core design flow evolving to absorb these new degrees of freedom?
  5. Synopsys: How is EDA ensuring that designers can achieve the entitlement promised in their move to multi-die?
  6. TSMC: How is TSMC working with EDA and OSAT partners to simplify the move to multi-die design?
The Long Term?
  1. Mercedes: How is Mercedes approaching the long-term reliability challenge?
  2. TSMC: How is TSMC dealing with process reliability and longevity for these expanding use cases?
  3. Ansys: What is the customer view of the reliability challenge?
  4. Synopsys: Do you see multi-die system as a significant driver for this technology moving forward?

(I can cover the answers to these questions in a second blog)

Summary

The answers to most of the questions are covered in the MIT paper, but here are a couple of points that rang true to me:

Chiplets are truly all about the ecosystem. So many companies could be involved, especially for chip designers that are using commercial chiplets, so where is the accountability? Dan K. made a great point about working with TSMC because they are the ecosystem experts and the buck stops with the wafer manufacturer. The TSMC ecosystem really is like the semiconductor version of Disneyland, the happiest place on earth.

Another point that was made, which was a good reminder for me, is that we are at the beginning of the chiplet era and the semiconductor industry is very fast moving. Either you harness the power of chiplets or the power of chiplets will harness you, in my opinion.

Also Read:

Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization

Feeding the Growing Hunger for Bandwidth with High-Speed Ethernet

Takeaways from SNUG 2023

Intel Keynote on Formal a Mind-Stretcher


Hardware Root of Trust for Automotive Safety

by Daniel Payne on 04-13-2023 at 10:00 am


Traveling by car is something that I take for granted, and I just expect that my trips will be safe, yet our cars increasingly use dozens of ECUs, SoCs and millions of lines of software code that, combined, present a target for hackers or system failures. The Automotive Safety Integrity Levels (ASIL) are designated by the letters A, B, C and D, where the ISO 26262 standard defines ASIL D as the highest degree of automotive hazard. Reliability metrics for an automotive system are the Single Point Fault Metric (SPFM) and the Latent Fault Metric (LFM).
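For orientation, the two metrics can be sketched from their ISO 26262 definitions. Note the standard actually works with failure rates (λ); the raw fault counts below are made-up illustrative numbers that happen to clear the ASIL-B thresholds (SPFM > 90%, LFM > 60%).

```python
# Sketch of the ISO 26262 hardware architectural metrics, using counts in
# place of failure rates. All numbers are invented for illustration.
def spfm(single_point_or_residual, total_safety_related):
    # SPFM: fraction of safety-related faults that are NOT single-point/residual.
    return 1 - single_point_or_residual / total_safety_related

def lfm(latent_multipoint, total_safety_related, single_point_or_residual):
    # LFM: fraction of the remaining faults that are NOT latent multi-point.
    return 1 - latent_multipoint / (total_safety_related - single_point_or_residual)

total, spf_rf, latent = 1000, 80, 200          # hypothetical fault counts
print(spfm(spf_rf, total), lfm(latent, total, spf_rf))  # 0.92 and ~0.783
```

With these toy numbers, SPFM = 92% and LFM ≈ 78%, both above the ASIL-B bars quoted below.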

Siemens EDA worked together with Rambus on a functional safety evaluation for automotive using the RT-640 Embedded Hardware Security Module with about 3 million faults, reaching ISO 26262 ASIL-B certification by achieving an SPFM > 90% and an LFM > 60%. The two Siemens tools used for functional safety evaluations were:

  • SafetyScope
    • Failures In Time (FIT)
    • Failure Mode Effect and Diagnostic Analysis (FMEDA) – permanent and transient faults, fault list
  • KaleidoScope
    • Fault simulation on the fault list
    • Fault detected, not detected, not observed

The Rambus RT-640 is a hardware security co-processor for automotive use, providing the root of trust, meeting the ISO 26262 ASIL-B requirements. Architectural blocks for the RT-640 include a RISC-V secure co-processor, secure memories and cryptographic accelerators.

Rambus RT-640 Root of Trust

Your automotive SoC would add an RT-640 to provide secure execution of user apps that are authenticated, stop tampering, provide secure storage, and thwart side-channel attacks. Software cannot even reach the critical tasks like key derivation done in hardware. All of the major processor architectures are supported: Intel, RISC-V, Arm.

Security warranties, and hardware cryptographic accelerators are supported, plus there’s protection against glitching and over-clocking.

For the functional safety evaluation, there was a manually defined fault list for signals covered by the provided safety mechanism. SafetyScope then reported the estimated FMEDA metrics, giving an initial idea of the core’s safety level. Modules that didn’t affect the core safety or were not safety critical were pruned from the fault list.

The Fault Tolerant Time Interval (FTTI) tells the tool how long to look for a fault to propagate before an alarm is raised. FTTI impacts fault simulation run times, so a balance is required. The maximum concurrent fault number was set between 600 and 1,000 faults based on experimentation. A two-step fault campaign approach was used to get the best results in the least amount of time.

Unclassified faults were faults not injected and not observed, so to reduce the number of non-injected faults they used two reduction methods:

  • Bus-simplification – when one or more bits are detected for a certain fault, the safety mechanism works well. Faults on the remaining bits of the bus are also considered detected.
  • Duplication-simplification – all faults not injected or observed that are part of a duplicated module are classified as detected.
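The bus-simplification rule in particular is easy to sketch in code. This is a hypothetical reconstruction of the rule as described, not Siemens’ actual implementation; the fault records and status names are invented for illustration (the duplication rule would be handled analogously at module granularity).

```python
# Hypothetical fault records: (signal, bit, status).
def bus_simplification(faults):
    # If any bit of a bus already has a detected fault, the safety mechanism
    # evidently covers that bus, so mark faults on its remaining bits detected.
    detected_signals = {sig for sig, bit, status in faults if status == "detected"}
    return [(sig, bit, "detected" if sig in detected_signals else status)
            for sig, bit, status in faults]

faults = [("bus_a", 0, "detected"),
          ("bus_a", 1, "not_injected"),   # promoted to detected
          ("bus_b", 0, "not_injected")]   # untouched: no detected bit on bus_b
print(bus_simplification(faults))
```

The payoff is fewer faults to actually inject, which is what shortens the 12-day campaign described next.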

Both permanent and transient fault campaigns were run on the RT-640 co-processor, taking some 12 days to complete when run on an IBM LSF HPC environment with parallel execution. The estimated SPFM numbers came from the first run of SafetyScope.

RT-640 fault campaign results

These fault campaign results exceed the ISO 26262 requirements of SPFM > 90% and LFM > 60% for ASIL-B certification.

Summary

Siemens and Rambus showed a methodology to evaluate the RT-640 co-processor, with nearly 3 million faults, reaching a total SPFM value of 91.9% plus an LFM of 75%, exceeding the requirements of the ASIL-B safety level in automotive applications. This is good news for electronic systems used in cars, assuring drivers that their travels are safer, drama-free and resistant to hacking efforts. Using a hardware root of trust like the Rambus RT-640 makes sense for safety-critical automotive applications, and the fault campaign results confirm it.

Read the complete 11-page white paper on the Siemens site.

Related Blogs

 


Siemens EDA on Managing Verification Complexity

by Bernard Murphy on 04-13-2023 at 6:00 am


Harry Foster is Chief Scientist in Verification at Siemens EDA and has served on the DAC Executive Committee over multiple years. He gave a lunchtime talk at DVCon on the verification complexity topic. He is an accomplished speaker and always has a lot of interesting data to share, especially his takeaways from the Wilson Research Group reports on FPGA and ASIC trends in verification. He segued that analysis into a further appeal for his W. Edwards Deming-inspired pitch that managing complexity demands eliminating bugs early. Not just shift-left but a mindset shift.

Statistics/analysis

I won’t dive into the full Wilson Report, just some interesting takeaways that Harry shared. One question included in the most recent survey was on the use of data mining or AI. 17% of reporting FPGA projects and 21% of ASIC projects said they had used some form of solution around their tool flows (not counting vendor-supplied tool features). Most of these capabilities, he said, are home-grown. He anticipates opportunity to expand such flow-based solutions as we do more to enable interoperability (think log files and other collateral data, which are often input to such analyses).

Another thought-provoking analysis was on first-silicon success. Over 20 years of reports, this is now at its lowest level, 24%, where typical responses over that period have hovered around 30%. Needing respins is of course expensive, especially for designs in 3nm. Digging deeper into the stats, about 50% of failures were attributable to logic/functional flaws. Harry said he found a big spike in analog flaws – almost doubling – in 2020. Semiconductor Engineering held a panel to debate the topic; panelists agreed with the conclusion. Even more interesting, the same conclusion held across multiple geometries; it wasn’t just a small-feature-size problem. Harry believes this issue is more attributable to increasing integration of analog into digital, for which design processes are not yet fully mature. The current Wilson report does not break down analog failure root causes. Harry said that subsequent reports will dig deeper here, also into safety and security issues.

Staffing

This is a perennial hot topic, also covered in the Wilson report. From 2014 to 2022, survey respondents report a 50% increase in design engineers but a 144% increase in verification engineers over the same period, now showing as many verification engineers on a project as design engineers. This is just for self-identified roles. The design engineers report they spend about 50% of their time on verification-related tasks. Whatever budget numbers you subscribe to for verification, those numbers are growing much faster than design as a percentage of total staffing budget.

Wilson notes that the verification to design ratio is even higher still in some market segments, more like 5 to 1 for processor designs. Harry added that they are starting to see similar ratios in automotive design which he finds surprising. Perhaps attributable to complexity added by AI subsystems and safety?

The cost of quality

Wilson updates where time is spent in verification, now attributing 47% to debug. Half of the verification budget is going into debug. Our first reaction to such a large amount of time being spent in debug is to improve debug tools. I have written about this elsewhere, especially on using AI to improve debug throughput. This is indeed an important area for focus. However, Harry suggests we should also turn to lessons from W. Edwards Deming, the father of the quality movement. An equally important way to reduce the amount of time spent in debug is to reduce the number of bugs created. Well duh!

Deming’s central thesis was that quality can’t be inspected into a product. It must be built in by reducing the number of bugs you create in the first place. This is common practice in fabs, OSATs and frankly any large-scale manufacturing operation. Design out and weed out bugs before they even get into the mainstream flow. We think of this as shift left but it is actually more than that. Trapping bugs not just early in the design flow but at RTL checkin through static and formal tests applied as a pre-checkin signoff. The same tests should also be run in regression, but for confirmation, not to find bugs that could have been caught before starting a regression.

A larger point is that a very effective way to reduce bugs is to switch to a higher-level programming language. Industry wisdom generally holds that new code will commonly contain between 15 and 50 bugs per 1,000 lines of code. This rate seems to hold independent of the language or level of abstraction. Here’s another mindset shift. In 1,000 lines of new RTL you can expect 15 to 50 bugs. Replace that with 100 lines of a higher-level language implementing the same functionality and you can expect ~2 to 5 bugs. That represents a significant reduction in the time you will need to spend in debug.
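The arithmetic behind that claim is simple enough to spell out; the 15-50 bugs per 1,000 lines come from the industry wisdom quoted above, and everything else is just scaling.

```python
# Back-of-the-envelope: if the defect rate per 1,000 lines is roughly
# language-independent, fewer lines means proportionally fewer bugs.
def expected_bugs(lines, rate_per_kloc):
    return lines * rate_per_kloc / 1000

rtl = (expected_bugs(1000, 15), expected_bugs(1000, 50))   # 1,000 lines of RTL
hls = (expected_bugs(100, 15), expected_bugs(100, 50))     # 100 lines of HLS
print(rtl, hls)  # (15.0, 50.0) vs (1.5, 5.0) expected bugs
```

That 10x reduction in expected bugs is the whole argument for raising the abstraction level.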

Back to Wilson on this topic: 40% of FPGA projects and 30% of ASIC projects claim they are using high-level languages. Harry attributes this relatively high adoption to signal processing applications, where HLS has seen enthusiastic uptake, an increasingly important domain these days (video, audio, sensing). But SystemC isn’t the only option. Halide is popular, at least in academia, for GPU design. As architecture becomes more important for modern SoCs, I can see the trend extending to other domains through other domain-specific languages.

Here is a collection of relevant links: Harry’s analysis of the 2022 verification study, the Wilson 2022 report on IC/ASIC verification trends, the Wilson 2022 report on FPGA verification trends, and a podcast series on the results.

 

 


Podcast EP153: Suk Lee’s Journey to Intel Foundry Services, with a Look to the Future

by Daniel Nenni on 04-12-2023 at 12:00 pm

Dan is joined by Suk Lee, Vice President of Design Ecosystem Development at Intel Foundry Services. He has over 35 years of experience in the semiconductor industry, with engineering, marketing, and general management positions at LSI Logic, Cadence, TI, Magma Design Automation and TSMC. At TSMC, he was responsible for managing the third party partners making up the OIP Ecosystem, and created the OIP Ecosystem Forum, the premier Ecosystem event in the Foundry Industry.

Suk discusses his journey through semiconductors, EDA and ultimately the foundry business. Dan explores the reasons Suk joined Intel Foundry Services, their focus and what the future holds for the organization in the changing semiconductor landscape.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization

by Daniel Nenni on 04-12-2023 at 10:00 am


Are you developing or thinking about developing your own RISC-V processor? You’re not alone. The use of the RISC-V ISA to develop processors for SoCs is a growing trend. RISC-V offers a lot of flexibility, with the ability to customize or create ISA and microarchitectural extensions to differentiate your design no matter your application area: AI, machine learning, automotive, data center, mobile, or consumer. Proprietary cores with custom extensions are often highly complex and have traditionally required an equally elevated level of expertise to design. Only those with deep knowledge and skills have successfully met the challenges associated with evaluating the impact of design decisions on power, performance, and area (PPA). But even experts can admit that this process can take an exceptionally long time, and full optimization of these parameters may not be obtained.

Join us for a webinar replay and learn how to overcome these challenges and take the risk out of developing your own RISC-V processor.

Synopsys is dedicated to addressing the challenges facing RISC-V-based design with a portfolio of industry leading solutions that can bring RISC-V designs to life faster and easier. This Synopsys webinar will cover two tools:  Synopsys ASIP Designer and Synopsys RTL Architect. These tools can help chip designers create highly customized processors faster while meeting the desired PPA targets with confidence. We will also show these solutions in action with a real-world case study that will highlight their interoperability and the results that can be achieved.

Synopsys ASIP Designer is the leading tool suite for designing and programming Application-Specific Instruction-set Processors (ASIPs). From a user-defined processor model capturing the ASIP's instruction set and microarchitecture in the architecture description language nML, ASIP Designer automatically creates both a complete software development kit, with an efficient C/C++ compiler, and a synthesizable RTL implementation of the ASIP.

The complexity of RISC-V chips and restrictive advanced-node rules have made it more difficult for implementation tools to achieve power, performance, and area (PPA) targets. Synopsys RTL Architect is the industry's first physically aware RTL analysis, exploration, and optimization environment. The solution enables designers to "shift left" and predict the implementation impact of RTL, significantly reducing RTL development time and producing better RTL.

This online seminar will present a new interoperability solution that facilitates a “Synthesis-in-the-Loop” design approach, both during earlier architectural design stages with processor model modifications and during RTL implementation.  Synopsys ASIP Designer’s RTL generation tool has been extended with an “ASIP RTL Explorer” utility that can systematically generate multiple RTL implementations of the ASIP with different design options.  Then, using Synopsys RTL Architect’s parallel exploration capabilities, designers can perform a comparative analysis of these RTL variants with respect to performance, power, and area (PPA).
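The variant-comparison step described above can be pictured with a small sketch: generate several RTL implementations, collect a PPA report for each, and rank them against the optimization goal. The data class, field names, and numbers below are purely illustrative assumptions, not actual output from ASIP Designer or RTL Architect.

```python
# Hypothetical sketch of comparative PPA analysis across RTL variants.
# All names and numbers are illustrative, not real tool output.
from dataclasses import dataclass

@dataclass
class PPAReport:
    variant: str        # design option used to generate this RTL
    fmax_mhz: float     # achievable clock frequency
    power_mw: float     # estimated dynamic power
    area_um2: float     # estimated cell area

def energy_score(r: PPAReport) -> float:
    # For an energy-efficiency target, lower power per unit of
    # performance is better (a simple mW/MHz figure of merit).
    return r.power_mw / r.fmax_mhz

reports = [
    PPAReport("baseline",          500.0, 12.0, 45000.0),
    PPAReport("2-stage-mac",       620.0, 14.5, 52000.0),
    PPAReport("shared-multiplier", 480.0,  9.8, 39000.0),
]

# Pick the variant with the best (lowest) energy figure of merit.
best = min(reports, key=energy_score)
print(best.variant)
```

In a real flow the reports would come from the implementation tool rather than hard-coded values, and the ranking criterion would match whatever PPA target the team has set.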

We will illustrate the effectiveness of the new interoperability solution with a case study of an ASIP design with RISC-V ISA extensions for AI-optimized MobileNet v3 inference, for which we want to find an energy-efficient implementation.  We will show how Synopsys ASIP Designer's RTL Explorer generated seven RTL variants for this ASIP and how Synopsys RTL Architect was used to compare and analyze the power consumption of these alternatives quickly and accurately.

The new interoperability solution reinforces Synopsys ASIP Designer’s Synthesis-in-the-Loop methodology and brings another productivity gain in the design of SoCs with programmable accelerators.

Join us for our Synopsys webinar to remove risk from your RISC-V processor development. Register today and watch the replay!

Also Read:

Feeding the Growing Hunger for Bandwidth with High-Speed Ethernet

Takeaways from SNUG 2023

Full-Stack, AI-driven EDA Suite for Chipmakers

Power Delivery Network Analysis in DRAM Design


Optimizing Return on Investment (ROI) of Emulator Resources

Optimizing Return on Investment (ROI) of Emulator Resources
by Kalar Rajendiran on 04-12-2023 at 6:00 am

Verification Options SW vs HAV

Modern-day chips are increasingly complex, with stringent quality, performance, and power-consumption requirements. Verification of these chips is very time-consuming and accounts for approximately 70% of the simulation workload on EDA server farms. As software-based simulators are too slow for many requirements, hardware-assisted verification (HAV) technologies are finding increased use for many different purposes. Emulators are used for pre-silicon software development, hardware/software co-verification, debugging, and in-circuit emulation (ICE). While emulators improve throughput, they are expensive; a single emulator can cost millions of dollars. Add the cost of a dedicated team to support the specialized workflows, and we are talking about a very expensive proposition.

Given the large investment involved in using emulators, it is natural to want to maximize the return on investment (ROI), which means the emulators must be used very efficiently. An emulator running only ICE jobs may remain idle overnight and during weekends, translating to very low utilization of this expensive resource. Utilization can be increased by running both ICE jobs and overnight batch jobs. For this, designs must be compiled from RTL code, and emulation boards must be allocated and programmed. Virtual target devices such as PCI, USB, and video controllers must also be soft-assigned before jobs can run.
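A back-of-the-envelope calculation shows why filling the idle overnight and weekend slots matters so much. The hour counts below are illustrative assumptions, not figures from the article.

```python
# Rough sketch: emulator utilization with and without batch jobs.
# All hour counts are illustrative assumptions.
HOURS_PER_WEEK = 7 * 24            # 168 hours in a week

# ICE-only: the emulator is busy roughly during business hours.
ice_hours = 5 * 10                 # 5 weekdays x 10 hours

# Overnight and weekend batch regressions fill the idle slots.
batch_hours = 5 * 12 + 2 * 20      # weeknights + weekend runs

ice_only = ice_hours / HOURS_PER_WEEK
with_batch = (ice_hours + batch_hours) / HOURS_PER_WEEK

print(f"ICE only: {ice_only:.0%}, with batch: {with_batch:.0%}")
```

Under these assumed numbers, utilization roughly triples once batch work backfills the nights and weekends, which is the economic case for automated scheduling.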

According to Global Market Insights, the HAV market is expected to exceed $15 billion by 2027, representing a CAGR of over 15%. Investment in HAV tools is on the rise, and investment in hardware emulation now exceeds that in software-based verification, reaching $718 million in 2020. With this growth trend, optimizing the use of HAV resources takes on added importance.

HAV Optimization Challenges

Organizations often use hard-partitioning strategies to allocate emulator resources among teams, but these allocations may be incompatible with the needs of simulation acceleration (SA) jobs. Emulation users refer to the challenge of efficiently packing workloads as the Tetris problem. There is also the need to manage the time required to cut over between workloads. Often the emulation environment includes hardware from multiple emulation vendors, which compounds the scheduling challenge because different emulators have different topology characteristics.

Schedulers need to:

  • Account for existing utilization and interactive jobs when placing new workloads
  • Schedule and share a limited number of emulated peripheral devices
  • Consider long lead times and workflows required to compile designs and load them into the emulator
  • Accommodate design teams requesting hard usage windows and future resource reservations
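The checks in the list above can be sketched as a single placement test: a job fits on an emulator only if enough boards are free, the shared peripheral models it needs are available, and its run does not collide with a reserved window. The data structures and field names below are simplified illustrations, not Hero's actual scheduling model.

```python
# Toy sketch of the scheduler placement checks described above.
# All structures and field names are simplified illustrations.
def can_place(job, emulator, now):
    # Account for existing utilization: enough boards must be free.
    boards_free = emulator["total_boards"] - emulator["boards_in_use"]
    if job["boards"] > boards_free:
        return False
    # Shared virtual peripherals (e.g. PCI, USB) are a limited pool.
    for dev in job["peripherals"]:
        if emulator["free_devices"].get(dev, 0) < 1:
            return False
    # Respect hard usage windows reserved by design teams.
    end = now + job["runtime_h"]
    for (start, stop) in emulator["reservations"]:
        if now < stop and end > start:   # intervals overlap
            return False
    return True

emu = {"total_boards": 8, "boards_in_use": 5,
       "free_devices": {"pci": 1, "usb": 0},
       "reservations": [(18, 24)]}       # hours from now

job = {"boards": 2, "peripherals": ["pci"], "runtime_h": 3}
print(can_place(job, emu, now=8))
```

A production scheduler would also weigh compile lead times and queue priorities, but even this toy version shows why manual packing quickly becomes a Tetris game.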

Because of all these challenges, emulation workloads are still managed manually in many environments. When multiple groups compete for the same emulation resources, manual management becomes a bottleneck. Allocation of hardware emulation resources should be automated for greater efficiency and throughput.

Altair’s Hardware Emulator Resource Optimizer (Hero)

Altair® Hero™ is an end-to-end solution designed specifically for hardware emulation environments, addressing all aspects of emulation flow including design compilation, emulator selection, and software and regression tests. Hero’s vendor-independent architecture and comprehensive policy management features provide organizations with flexibility and control.

Hero supports a variety of hardware-assisted verification platforms, including traditional software-based tools as well as hardware emulators based on custom processors and FPGAs. It is designed to be emulator-agnostic, providing a generic scheduling model that treats boards and modules as “leaf” resources, making it adaptable to most commercially available emulation platforms.

Key features of Hero 2.0 include:

  • Policy management, including FairShare and preemption
  • Soft reservations that let users reserve blocks of time on an emulator in advance
  • Visibility into emulator-specific metrics for hardware asset optimization and organizational planning
  • A rich GUI to simplify monitoring and determine the root cause of failing jobs
  • Support for emulation and prototyping platforms from multiple vendors

Hero enables emulator users to benefit from the same kinds of policy-based controls common in software verification environments: attaching different priorities to different emulation jobs, applying FairShare policies to manage sharing of emulator resources, and using preemption to ensure resources are available during business hours for interactive ICE activities. Hero also provides granular, real-time visibility into emulator resources, including runtime host allocations and the boards and modules used across the various emulators.
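To make the FairShare idea concrete, here is a minimal sketch of one common formulation: each group has a target share of emulator time, and groups whose recent usage falls below their entitlement rank higher in the queue. The formula, numbers, and group names are made up for illustration; they do not reflect Hero's actual policy engine.

```python
# Illustrative FairShare-style ranking: under-served groups
# (actual usage below target share) get scheduled first.
# Numbers and names are made up, not Hero's real policy model.
def fairshare_priority(target_share, recent_usage, total_usage):
    # Actual fraction of recent emulator time this group consumed.
    actual = recent_usage / total_usage if total_usage else 0.0
    # Positive when the group is owed time, negative when over-served.
    return target_share - actual

# group -> (target share, recent usage in hours)
groups = {"soc-team": (0.5, 70.0),
          "ip-team":  (0.3, 10.0),
          "sw-team":  (0.2, 20.0)}
total = sum(usage for _, usage in groups.values())   # 100 hours

ranked = sorted(groups,
                key=lambda g: fairshare_priority(*groups[g], total),
                reverse=True)
print(ranked)
```

Here the soc-team has consumed far more than its 50% entitlement, so the under-served ip-team jumps to the front of the queue; preemption policies would then handle reclaiming resources for interactive ICE work during business hours.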

Summary

According to Altair, Hero is the only scheduler that optimizes the use of resources across multiple emulators and hardware-assisted verification platforms. Altair has published two whitepapers about Hero that address optimizing the use of emulation resources. Anyone running workloads on emulators will find these whitepapers informative.

The first whitepaper covers how Hero helps maximize the ROI of customers' emulator resources, going into many details specific to hardware emulation. For example, it identifies a number of tangible business metrics to track when evaluating the utilization efficiency of emulation resources, rather than the simplistic metric of percentage of emulator gates used. The second whitepaper is more of an application note on how users can apply Hero's features to their emulation jobs.

For more details, visit the Altair Hero product page.

Also Read:

Measuring Success in Semiconductor Design Optimization: What Metrics Matter?

Load-Managing Verification Hardware Acceleration in the Cloud

Altair at #59DAC with the Concept Engineering Acquisition