AMD Puts Synopsys AI Verification Tools to the Test
by Mike Gianfagna on 08-28-2023 at 6:00 am

The various algorithms that comprise artificial intelligence (AI) are finding their way into the chip design flow. What is driving a lot of this work is the complexity explosion of new chip designs required to accelerate advanced AI algorithms. It turns out AI is both the problem and the solution in this case. AI can be used to cut the AI chip design problem down to size. Synopsys has been developing AI-assisted design capabilities for quite a while, beginning with the release of a design space optimization capability (DSO.ai) in 2020. Since then, several new capabilities have been announced, significantly expanding its AI-assisted footprint. You can get a good overview of what Synopsys is working on here. One of the capabilities in the Synopsys portfolio focuses on verification space optimization (VSO.ai). The real test of any new capability is its use by a real customer on a real design, and that is the topic of this post. Read on to see how AMD puts Synopsys AI verification tools to the test.

VSO.ai – What it Does

Test coverage of a design is the core issue in semiconductor verification. The battle cry is, “if you haven’t exercised it, you haven’t verified it.” Stimulus vectors are generated using a variety of techniques, with constrained random being a popular approach. Those vectors are then used in simulation runs on the design, looking for test results that don’t match expected results.

By exercising more of the circuit, the chance of finding functional design flaws is increased.

Verification teams choose structural code coverage metrics (line, expression, block, etc.) of interest and automatically add them to simulation runs. As each test iteration generates constrained-random stimulus conforming to the rules, the simulator collects metrics for all the forms of coverage included. The results are monitored, with the goal of tweaking the constraints to try to improve the coverage. At some point, the team decides that they have done the best that they can within the schedule and resource constraints of the project, and they tape out.

Code coverage does not reflect the intended functionality of the design, so user-defined coverage is important. This is typically a manual effort, spanning only a limited percentage of the design’s behavior. Closing coverage and achieving verification goals is quite difficult.

A typical chip project runs many thousands of constrained-random simulation tests with a great deal of repetitive activity in the design. So, the rate of new coverage slows, and the benefit of each new test reduces over time.

At some point, the curve flattens out, often before goals are met. The team must try to figure out what is going on and improve coverage as much as possible within time and resource constraints. This “last mile” of the process is quite challenging. The amount of data collected is overwhelming, and analyzing it to determine the root cause of a coverage hole is difficult and labor-intensive. Is it an illegal bin for this configuration or a true coverage hole?
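
To see why the curve flattens, consider a toy model (a simplified illustration of the statistics, not how any simulator or VSO.ai actually works) in which each constrained-random test hits coverage bins with non-uniform probability; once the easy bins are filled, each additional test contributes less and less new coverage:

```python
import random

NUM_BINS = 1000                                      # toy coverage model: 1,000 bins
HITS_PER_TEST = 50                                   # each random test exercises ~50 bins
WEIGHTS = [1 / (i + 1) for i in range(NUM_BINS)]     # some bins are much harder to hit

random.seed(0)
covered = set()
for test in range(1, 401):
    # each constrained-random test hits a weighted-random subset of bins
    covered.update(random.choices(range(NUM_BINS), weights=WEIGHTS, k=HITS_PER_TEST))
    if test % 100 == 0:
        print(f"after {test} tests: {100 * len(covered) / NUM_BINS:.1f}% coverage")
```

Each additional batch of tests adds noticeably less coverage than the one before it, and the hard-to-hit bins are exactly the “last mile” that consumes the most effort.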

The design of complex chips contains many problems that look like this – the requirement to analyze vast amounts of data and identify the best path forward. The good news is that AI techniques can be applied to this class of problems quite successfully.

For coverage definition, Synopsys VSO.ai infers some types of coverage beyond traditional code coverage to complement user-specified coverage. Machine learning (ML) can learn from experience and intelligently reuse coverage when appropriate. Even during a single project, learnings from earlier coverage results can help to improve coverage models.

VSO.ai works at the coarse-grained test level and provides automated, adaptive test optimization that learns as the results change. Running the tests with highest ROI first while eliminating redundant tests accelerates coverage closure and saves compute resources.
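
As a rough sketch of the test-level idea (a greedy illustration under simplified assumptions, not a description of VSO.ai’s actual algorithm), tests can be ranked by how much new coverage each one adds, so the highest-ROI tests run first and tests that add nothing are flagged as redundant:

```python
def order_tests_by_roi(test_coverage):
    """Greedy ordering: repeatedly pick the test that adds the most new coverage.

    test_coverage maps a test name to the set of coverage bins it hit in a
    previous regression. Tests adding no new coverage are reported as redundant.
    """
    remaining = dict(test_coverage)
    covered, ordered, redundant = set(), [], []
    while remaining:
        name, bins = max(remaining.items(), key=lambda kv: len(kv[1] - covered))
        if not bins - covered:
            redundant.extend(remaining)      # nothing left adds new coverage
            break
        ordered.append(name)
        covered |= bins
        del remaining[name]
    return ordered, redundant

# hypothetical regression data, only to show the call
tests = {"t_rand_a": {1, 2, 3}, "t_rand_b": {2, 3}, "t_corner": {4, 5}, "t_dup": {1, 3}}
print(order_tests_by_roi(tests))   # (['t_rand_a', 't_corner'], ['t_rand_b', 't_dup'])
```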

The tool also works at the fine-grained level within the simulator to improve the test quality of results (QoR) by adapting the constrained-random stimulus to better target unexercised coverage points. This not only accelerates coverage closure, but also drives convergence to a higher percentage value.

The last mile closure challenge is addressed by automated, AI-driven analysis of coverage results. VSO.ai performs root cause analysis (RCA) to determine why specific coverage points are not being reached. If the tool can resolve the situation itself, it will. Otherwise, it presents the team with actionable results, such as identifying conflicting constraints.

The figure below summarizes the benefits VSO.ai can deliver. A top-level benefit of these approaches is the achievement of superior results in less time with less designer effort. We will re-visit this statement in a moment.

The Benefits of VSO.ai

What AMD Found

At the recent Synopsys Users Group (SNUG) held in Silicon Valley, AMD presented a paper entitled, “Drop the Blindfold: Coverage-Regression Optimization in Constrained-Random Simulations using VSO.ai (Verification Space Optimization).”  The paper detailed AMD’s experiences using VSO.ai on several designs. AMD had substantial goals and expectations for this work:

Reach 100% coverage consistently with small RTL changes and design variants, but in an optimized, automated way.

AMD applied a well-documented methodology using VSO.ai across regression samples for four different designs. The figure below summarizes these four experiments.

Regression Characteristics Across Four Designs

AMD then presented a detailed overview of these designs, their challenges and the results achieved by using VSO.ai, compared to the original effort without VSO.ai. Recall one of the hallmark benefits of applying AI to the design process:

Achievement of superior results in less time with less designer effort

In its SNUG presentation, awarded one of the Top 10 Best Presentations at the event, AMD summarized the observed benefits as follows:

  • 1.5 – 16X reduction in the number of tests being run across the four designs to achieve the same coverage
  • Quick, on-demand regression qualifier
    • Can be used to gauge how well distributed a regression’s tests are when the user is unsure how many iterations are needed
  • Potentially target more bins under same budget
    • If the default regression(s) do not achieve 100% coverage, VSO.ai can potentially exceed it (as in experiment #1)
  • Removal of test cases from coverage regressions when they do not contribute coverage
  • More reliable test grading for constrained random tests
    • URG (Unified Report Generator): seed-based grading, vs.
    • VSO.ai: probability-based grading
  • Debug
    • Uncover coverage items that have a lower probability of being hit than expected

This presentation put VSO.ai to the test and the positive impact of the tool was documented.  As mentioned, this kind of user application to real designs is the real test of a new technology. And that’s how AMD puts Synopsys AI verification tools to the test.

Also Read:

WEBINAR: Why Rigorous Testing is So Important for PCI Express 6.0

Next-Gen AI Engine for Intelligent Vision Applications

VC Formal Enabled QED Proofs on a RISC-V Core


Podcast EP178: An Overview of Advanced Power Optimization at Synopsys with William Ruby
by Daniel Nenni on 08-25-2023 at 10:00 am

Dan is joined by William Ruby, director of product management for Synopsys Power Analysis products. He has extensive experience in the area of low-power IC design and design methodology, and has held senior engineering and product marketing positions with Cadence, ANSYS, Intel, and Siemens. He also has a patent in high-speed cache memory design.

Dan explores new approaches to power analysis and power optimization with William, who explains strategies for increasing accuracy of early power analysis, when there is more opportunity to optimize the design. Enhanced modeling techniques and new approaches to computing power are discussed. The benefits of emulation for workload-based power analysis are also explored.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The First TSMC CEO James E. Dykes
by Daniel Nenni on 08-25-2023 at 6:00 am

Most people (including ChatGPT) think Morris Chang was the first TSMC CEO, but it was in fact Jim Dykes, a very interesting character in the semiconductor industry.

According to his eulogy: Jim came from the humblest of beginnings, easily sharing that he grew up in a house without running water and never had a bed of his own. But because of his own drive, coupled with compassion, leadership, and intelligence, he was indeed a genuine “success story.” He was honored in his profession with awards too numerous to list. During his long career he held leadership positions in several companies, including Radiation, Harris, General Electric, Philips North America and TSMC in Taiwan. His work took him to locales in Florida, California, North Carolina and Texas as well as overseas, but he returned to his Florida roots to retire, living both in Fort McCoy and St. Augustine.

Jim was known around the semiconductor industry as a friendly, funny, approachable person. I did not know him, but some of my inner circle did. According to semiconductor lore, Jim Dykes was forced on Morris Chang by the TSMC Board of Directors due to his GE Semiconductor experience and Philips connections. Unfortunately, Jim and Morris were polar opposites and didn’t get along. Jim left TSMC inside the two-year mark and was replaced by Morris himself. Morris didn’t like Philips looking over his shoulder and stated that the TSMC CEO must be Taiwanese, and in my opinion he was not wrong. Morris then hired Don Brooks as President of TSMC. I will write more about Don Brooks next because he had a lasting influence on TSMC that is not generally known.

One thing Jim left behind that is searchable is industry presentations. My good friend and co-author Paul McLellan covered Jim’s “Four Little Dragons of the Orient and an Emerging Role Model for Semiconductor Companies” presentation quite nicely HERE. This presentation was made in January of 1988 while Jim was just starting as CEO of TSMC. I have a PDF copy in case you are interested.

“I maintain we are no less than a precursor of an entirely new way of doing business in semiconductors. We are a value-added manufacturer with a unique charter… We can have no designs or product of our own. T-S-M-C was established to bridge the gap between what our customers can design and what they can market.”

“We consider ourselves to be a strategic manufacturing resource, not an opportunistic one. We exist because today’s semiconductor companies and users need a manufacturing partner they can trust and our approach, where we and our customers in effect spread costs among many users, yet achieve the economics each seeks, makes it a win-win for everyone.”

So from the very beginning TSMC’s goal was to be the Trusted Foundry Partner which still stands today. From the current TSMC vision and mission statement:

“Our mission is to be the trusted technology and capacity provider of the global logic IC industry for years to come.”

Another interesting Jim Dykes presentation, “TSMC Outlook May 1988,” is on SemiWiki. It is more about Taiwan than TSMC, but interesting just the same.

“Taiwan, by comparison, is more like Silicon Valley. You find in Taiwan the same entrepreneurial spirit, the same willingness to trade hard work for business success and the opportunities to make it happen, that you find in Santa Clara County, and here in the Valley of the Sun. Even Taiwan’s version of Wall Street will seem familiar to many of you. There’s a red-hot stock market where an entrepreneur can take a company public and become rich overnight.”

I agree with this statement 100%, having experienced it firsthand from the 1990s through today.

I was also able to dig up a Jim Dykes presentation “TO BE OR NOT TO BE” from 1982, when he was VP of the Semiconductor Division at GE. In this paper Jim talks about the pros and cons of being a captive semiconductor manufacturer. A captive manufacturer is roughly what we would now call a system fabless company, one that makes its own chips for the complete systems it sells (Apple, for example). Remember, at the time, computer system companies were driving the semiconductor industry and had their own fabs: IBM, HP, DEC, DG, etc… so we have come full circle with systems companies making their own chips again.

Speaking of DG (Data General), I read Soul of a New Machine by Tracy Kidder during my undergraduate studies and absolutely fell in love with the technology. In fact, after graduating, I went to work for DG which was featured in the book.

I have a PDF copy of Jim’s “TO BE OR NOT TO BE” presentation in case you are interested.

Also read:

How Philips Saved TSMC

Morris Chang’s Journey to Taiwan and TSMC

How Taiwan Saved the Semiconductor Industry


Empyrean visit at #60DAC
by Daniel Payne on 08-24-2023 at 6:00 am

I arrived for my #60DAC booth appointment at Empyrean and was able to watch a customer presentation from Jason Guo of Diodes. Jason was talking about how his company used the Patron tool for EM/IR analysis on their automotive chips. Diodes was founded in 1959 in Plano, Texas, and has since grown to 32 locations around the globe, offering chips for logic, analog, power management, precision timing and interconnect.

Diodes has also used the Empyrean ALPS tool for AMS simulation. For an EM analysis flow they use ALPS for circuit simulation plus Patron to see the EM, IR and pin voltages. They can quickly view the EM layout violations, then make fixes to the layout. Mr. Guo said that they’ve used Empyrean tools for about two years now, and that they are easy to learn and use.

Patron EM/IR design flow

The layout viewer is called Skipper, and the colors displayed represent voltage drops (IR), where red is a violation.

After the customer presentation I talked with Jason Xing of Empyrean to get an update on what’s new in the last 12 months. Mr. Xing said that Empyrean now has a complete custom AMS design and verification tool flow, consisting of tools for:

  • Schematic Capture
  • Custom IC layout
  • SPICE circuit simulation
  • DRC and LVS checking
  • EM/IR analysis

Designers of Power Management ICs (PMIC) can use Empyrean tools for both design and verification.

Something new for 2023 is standard cell and memory characterization, with a tool called Empyrean Liberal. Their approach to characterization uses a Static Timing Analysis (STA) method to measure delays with exhaustive searching, so no timing arcs are missed. These tools are cloud-ready to speed up characterization run times, and they support LVF, an extension to the Liberty format that adds statistical timing variation to the measurements.

RF circuit designers can use the Empyrean ALPS-RF circuit simulator for both frequency and time-domain simulations, supporting large signal, small signal and noise analysis.

The company has about 900 people now, and it went public in July 2022 on the Shenzhen exchange in China. Some 600 customers are using Empyrean EDA tools, and even the foundries are using their tools. Their headquarters are in Beijing, with R&D done in Nanjing, Chengdu, Shanghai and Shenzhen.

Happy customers also include Willsemi, using the Empyrean Polas tool for reliability analysis of PMICs by measuring Rdson and performing EM analysis. Monolithic Power Systems (MPS) also uses the Polas power layout analysis tool. Renesas does SoC designs with complex clocking structures, and the Empyrean ClockExplorer tool helped improve the quality of their clock structures. O2Micro used the Empyrean AMS flow with the TowerJazz iPDK for their Power IC and analog design projects.

Summary

Empyrean has been blogged about here on SemiWiki since 2019, and I thoroughly enjoyed visiting their booth in July to see their new products and growth in the EDA industry. Their point tools have grown into tool flows supporting custom IC design and even flat panel design. I look forward to visiting them again in 2024 at DAC to report on new developments.

Related Blogs

 


Using Linting to Write Error-Free Testbench Code
by Daniel Nenni on 08-23-2023 at 10:00 am

In my job, I have the privilege to talk to hundreds of interesting companies in many areas of semiconductor development. One of the most fun things for me is interviewing customers—hands-on users—of specific electronic design automation (EDA) tools and chip technologies. Cristian Amitroaie, CEO of AMIQ EDA, has been very helpful in introducing me to both commercial and academic users of his company’s Design and Verification Tools (DVT) Integrated Development Environment (IDE) family of products.

Recently, Cristian connected me with Lars Wolf, Harald Widiger, Daniel Oprica, and Christian Boigs from Siemens. They kindly shared their time with me to talk about their experiences with AMIQ’s Verissimo SystemVerilog Linter.

SemiWiki: Can you please tell us a bit about your group and what you do?

Siemens: We are members of a 10-15 person verification team at Siemens and part of a department that does turnkey development of application-specific integrated circuits (ASICs) for factory automation products within the company. Our team of experts focuses on verification IP (VIP), developing new VIP components and also reusing and adapting existing VIP.

SemiWiki: What are your biggest challenges?

Siemens: We have all the usual issues of any project, such as limited resources, tight schedules, and increasing complexity. But there are two specific challenges that led us to look at Verissimo as a possible solution.

First, since our VIP can be used by many projects, we have a very high standard of quality. We don’t want our ASIC design teams debugging problems that turn out to be issues in our VIP, so we must provide them with error-free models, testbenches, and tests. Of course, the better the verification environment, the better the ASICs that we provide to the product teams.

The second challenge involves the extension of our development landscape to incorporate SystemVerilog and the Universal Verification Methodology (UVM) for our projects. At the time, many of our engineers were not yet experts in this domain, so we were looking for tools that would help them learn and help them write the best possible code.

SemiWiki: So, you thought that a SystemVerilog/UVM linting tool would help?

Siemens: Yes, we were looking specifically for such a solution. The whole point of linting is to identify and fix errors so that the resulting code is correct. We believed that the engineers would learn over time to avoid many of these errors and make code development faster and smoother. We considered several options and ended up choosing Verissimo from AMIQ EDA.

SemiWiki: What was the process for getting the team up and running with the tool?

Siemens: It’s built on an IDE, so it’s easy to use and it provides all sorts of aids in navigating through code and fixing errors. Most engineers used it successfully after minimal training. We spent much of our effort refining the linting rules checked. Verissimo has more than 800 out-of-the-box rules, and some were more important to us than others. We started with the default ruleset and then turned off the checks that we didn’t need for one reason or another. We ended up with about 510 rules enabled. Every rule must be explainable and understandable by every verification engineer.

SemiWiki: Is this ruleset static?

Siemens: No; we meet regularly to review the rules and to consider adding new ones since AMIQ EDA is always offering more. On average, we add four or five rules every month. We try to keep up with new rules and new features in Verissimo so we’re always getting the maximum benefit for our team.

SemiWiki: Are there any particular rules that impressed you?

Siemens: We know that a lot of the rules were added due to user demand, and in general we also find these rules very useful. There are some rules that cover aspects of SystemVerilog that we hadn’t previously considered, such as detecting dead code, identifying copy-and-paste code, and pointing out coding styles that may reduce simulation performance. We were especially intrigued by the random stability checks. Initially we took reproduction of random stimulus for granted, but we learned that it doesn’t happen without proper coding style.

SemiWiki: How is Verissimo run in your verification flow?

Siemens: We encourage our engineers to run linting checks as they write their code, but we do not require them to do so. We considered making a linting run a requirement for code commit, but we didn’t want engineers to consider waiving possible errors just to get through the check-in gate. We require the flexibility to commit code that may not yet be perfect but is needed to get the testbench to compile and run regressions.

We decided instead to make Verissimo part of our daily regression run. Using a common ruleset ensures consistency in coding style and adoption of best practices across the entire team. Verissimo results are included in our regression dashboard and tracked over time, along with code coverage and pass/fail results from regression tests. Any linting errors and error waivers are discussed during code reviews as part of making the VIP as clean and reusable as possible.

SemiWiki: Do you see any resistance to linting among your engineers?

Siemens: We honestly didn’t know what to expect in this regard, and we have been pleasantly surprised. We have a small, cohesive team and there is no debate over using linting as part of our process. There is also no abuse of error waivers, which are reviewed carefully and used only as a last resort.

SemiWiki: Has Verissimo lived up to your expectations?

Siemens: It certainly has addressed the two challenges that led to us looking for a linting solution: high quality and coding guidance. We now have confidence that our VIP is lint error-free, with no syntax or semantic errors, and compliant to our coding rules. Our VIP is more reusable, maintainable, and manageable. Verissimo has also proven to be a very good learning tool. As we discuss rules and debug linting errors, we understand both SystemVerilog and UVM better, and we think more deeply about our code.

SemiWiki: How has your experience been working with AMIQ EDA?

Siemens: It’s more of a partnership than a pure vendor-customer relationship. Early in our engagement, we compared our coding guidelines with the Verissimo rules, and asked AMIQ EDA to add some new rules plus adjustments and new parameters for some existing rules. Of course, as with any piece of software, we’ve found a few bugs in the tool itself. In all cases, we have found them to be responsive and supportive.

SemiWiki: Do you plan to change the way that you use Verissimo in the future?

Siemens: Since we have been successful so far, we plan to continue everything we are doing now on all new VIP projects. There are two areas where we would like to improve a bit. Our goal was to meet every two weeks to discuss linting rules, errors, and waivers, but we haven’t always done that. We would like to make those meetings more regular. We would also like to update Verissimo releases more often throughout the project so that we can take advantage of new rules that require new capabilities in the tool.

SemiWiki: Gentlemen, thank you very much for your time. It is great that you have had so much success with adding linting to your testbench and VIP development flow.

Siemens: It has been our pleasure.

Also Read:

A Hardware IDE for VS Code Fans

Using an IDE to Accelerate Hardware Language Learning

AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family


Predictive Maintenance in the Context of Automotive Functional Safety
by Kalar Rajendiran on 08-23-2023 at 6:00 am

The automotive industry is undergoing a major transformation. The convergence of electrification, connectivity, driver-assistance technologies, and software-defined vehicles has driven the adoption of advanced System-on-Chips (SoCs) that deliver unprecedented levels of functionality and performance. However, this transformation also raises concerns about the safety and reliability of these complex systems, presenting unique challenges and opportunities in ensuring their performance, reliability, and safety. The traditional reactive approach to functional safety, which addresses issues after they occur, is insufficient for the complex and interconnected automotive systems of today. Proactive approaches are needed to predict failures, anticipate risks, and mitigate them before they cause significant disruptions. Maintaining the resilience and operational efficiency of automotive systems calls for continuous monitoring and predictive insights.

proteanTecs has published a whitepaper that explores the shifting landscape of automotive functional safety and the required methodologies to ensure the safe and reliable operation of these intricate automotive systems. The whitepaper delves into the implications of data-driven advancements in artificial intelligence, machine learning, and data analytics for automotive functional safety and useful life extension. It explores proactive approaches that transcend traditional reactive measures, enabling stakeholders to anticipate failures and proactively mitigate risks.

This whitepaper is an excellent read for everyone involved in the development and deployment of automotive functional safety systems. Following are some excerpts from that whitepaper.

Hardware Failure Anticipation through Prognostic Techniques

One of the key challenges in ensuring functional safety is anticipating hardware failures. With the application of prognostic techniques, which involve analyzing data to predict the future reliability of components or systems, automotive stakeholders can anticipate potential hardware failures. This allows them to take preventive measures before they occur, thus enhancing the safety and longevity of automotive systems.

Understanding the Impact of Defects on System Lifetime

Defects inherent in manufacturing or caused by the operating environment and usage patterns can significantly affect the lifetime of automotive systems. Analyzing defect-induced failures and understanding their occurrence can help in estimating the Failure Rate (FR) of electronic devices. Equipped with this insight, automotive manufacturers can focus on improving the reliability of electronic components, thereby ensuring the longevity of the systems and enhancing functional safety.

Time-To-Failure (TTF) Predictions and Reliability Improvement

Estimating the TTF is another critical aspect in understanding the reliability of automotive components and systems. TTF predictions involve monitoring device performance to estimate when a device will fail. By combining observed (empirical field data) and predicted failures, reliability parameters can be estimated more quickly. TTF predictions can be leveraged to gain insights into potential failure scenarios and take preemptive actions. These preemptive actions enhance the overall functional safety of automotive systems.
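
The bookkeeping behind such a failure-rate estimate is straightforward. As a minimal sketch (standard reliability arithmetic, not proteanTecs’ method; the fleet numbers below are hypothetical), observed field failures and failures predicted by a TTF model can be pooled over accumulated device-hours to express FR in FIT, i.e., failures per billion device-hours:

```python
def estimated_fit(observed_failures, predicted_failures, devices, hours_per_device):
    """Failure rate in FIT: failures per 1e9 cumulative device-hours."""
    device_hours = devices * hours_per_device
    return (observed_failures + predicted_failures) * 1e9 / device_hours

# hypothetical fleet: 200,000 devices with 5,000 operating hours each,
# 3 failures observed in the field plus 2 more predicted by the TTF model
print(f"{estimated_fit(3, 2, 200_000, 5_000):.1f} FIT")   # -> 5.0 FIT
```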

Leveraging Deep Data On-Chip Monitors and Degradation Modeling

Monitoring the margin degradation of Integrated Circuits (ICs) is essential to estimating FR and TTF of automotive systems. The implementation of deep data on-chip monitors and degradation modeling based on Physics-of-Failure principles are essential for continuous monitoring of ICs. This methodology provides real-time data for proactive decision-making and preemptive actions, thereby enhancing functional safety. The method involves using IC-embedded circuits called “Agents” strategically placed within the device to monitor degradation over time without interrupting normal operations. The Agents provide high-resolution data on chip parameters and degradation, allowing for the prediction of TTF for individual devices based on manufacturing parameters and mission history.

The Figure below shows how the Agents are connected to the monitored logic to measure timing margin of the logical paths.

During normal IC operations, the worst-case margin of the monitored logical paths is stored in the Agent and the data can be read at any time.

Estimating Remaining Useful Life (RUL) and Preventing Future Failures

By reading Agent data during reliability stress tests, the primary degradation mechanism can be determined, and TTF prediction algorithms can be used to estimate the remaining lifetime of devices. The ability to estimate RUL of automotive components is crucial for prescriptive measures and risk mitigation. Machine learning algorithms and predictive analytics can be applied to estimate RUL and prevent future failures. By identifying potential points of failure in advance, automotive manufacturers can implement preventive measures to ensure safety and operational efficiency.
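
As a minimal sketch of the idea (assuming a simple linear degradation trend purely for illustration; the actual models and algorithms are more sophisticated), margin readings collected over time can be fit with a trend and extrapolated to a failure threshold to estimate RUL:

```python
import numpy as np

def estimate_rul(hours, margins_ps, failure_margin_ps=10.0):
    """Fit a linear degradation trend to timing-margin readings and
    extrapolate to the margin level at which the device is deemed failed."""
    slope, intercept = np.polyfit(hours, margins_ps, 1)
    if slope >= 0:
        return float("inf")                     # no measurable degradation yet
    hours_at_failure = (failure_margin_ps - intercept) / slope
    return max(hours_at_failure - hours[-1], 0.0)

# hypothetical readings: worst-case path margin (ps) vs. operating hours
hours   = np.array([0, 2_000, 4_000, 6_000, 8_000])
margins = np.array([50.0, 48.5, 47.2, 45.8, 44.5])
print(f"estimated RUL: {estimate_rul(hours, margins):.0f} hours")
```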

Extending Useful Life

Prescriptive maintenance, a related concept, recommends actions that change the future outcome by adapting the operating conditions of the device. While some intrinsic failures are difficult to predict, device aging can be modeled, and operational workload can be reduced to minimize stress. This may involve restricting software processes, adjusting voltage and frequency, or employing “limp-home mode” strategies. By reducing the operational stress, the occurrence of wear-out faults can be delayed, leading to an extended useful lifetime for the device.

The Figure below shows how a noticeable extension of useful lifetime can be achieved through a combination of predictive and prescriptive maintenance, as the FR remains lower than 100 FIT for a longer period.

Summary

The ever-changing landscape of advanced SoCs in the automotive industry demands a fresh perspective on functional safety. Embracing proactive approaches, harnessing data-driven insights, and leveraging advanced techniques such as prognostics, degradation modeling, and predictive analytics are essential to support the transforming auto industry. By embracing continuous monitoring and predictive insights, automotive manufacturers and OEMs can achieve unprecedented levels of resilience, robustness, and operational efficiency from ICs to ECUs.

You can download the entire whitepaper from here. To learn more about proteanTecs technology and solutions, visit www.proteanTecs.com.

By ensuring the utmost performance, reliability, and safety of automotive systems, stakeholders can unlock the full potential of electrification, connectivity, and driver-assistance technologies while addressing the functional safety challenges of the modern automotive landscape.

Also Read:

Semico Research Quantifies the Business Impact of Deep Data Analytics, Concludes It Accelerates SoC TTM by Six Months

Maintaining Vehicles of the Future Using Deep Data Analytics

Webinar: The Data Revolution of Semiconductor Production


Samtec Innovates a New Approach for High-Frequency Analog Signal Propagation
by Mike Gianfagna on 08-22-2023 at 10:00 am

We are all familiar with the tradeoffs between copper and fiber for digital signal transmission. The short version is that fiber is flexible like copper, but supports higher data rates with less loss over longer distances. The bad news is that converting digital signals to light and back again isn’t a trivial process. These kinds of challenges also exist in the world of high-frequency analog transmission. For millimeter-wave applications, coaxial cable is the standard, just as copper is for digital. For higher frequencies and lower losses, waveguides are the step-up technology. This approach requires rigid structures that take up a lot of room and don’t route well. See the photo above. That is, until Samtec invented the first flexible waveguide cable. Read on to see how Samtec innovates a new approach for high-frequency analog signal propagation.

The Story of Waveguides

Waveguides

Thanks to Wikipedia, you can get a good overview of waveguide technology, its applications and its history here. As the name implies, these structures guide high-frequency signals through a defined, low-loss path as the signal propagates through air. The structures involved can be quite impressive, almost artistic in nature.

While these structures are eye-catching, they defy the mandate of every system design for a compact, efficient form factor.

Fortunately, the folks at Samtec have a knack for inventing new ways to propagate signals that are efficient, low loss, and fit all design parameters. That core competency has allowed the company to re-invent the waveguide concept.

The Samtec Approach to Waveguides

What if waveguides could have a small profile and facilitate flexible routes with no loss of performance? This is exactly what Samtec is delivering, raising the bar for system design once again. Its next-generation waveguide technology delivers high-frequency, low-loss performance in a small-form-factor flexible cable. The product line appears to be a first for the industry.

Samtec waveguide

This new, high-frequency micro waveguide technology supports the demands of next-generation millimeter-wave systems. It uses a cable design that allows flexibility and reduced size, supports frequencies up to 90 GHz, and offers loss performance greatly improved over coaxial cables.

As we see in the prior photos, higher frequencies often require the use of rigid, metallic waveguides. Samtec’s technology provides an alternative solution that is flexible, easier to use, and lower cost, while closely approaching the loss performance of a traditional rigid waveguide. This combination of features is unique and will open the door to new product designs.

Samtec recently demonstrated multiple applications of its new flexible waveguide technology at IMS 2023. You can check out this blog that describes the event. There is a 2.5-minute video in that blog that is worth watching. You will see the applications in action and get to see some impressive real-time statistics as well.

To Learn More

Samtec is focused on delivering world-class signal integrity. You can learn more about this corporate obsession here. The new, flexible waveguide products are another way Samtec delivers on this promise. I can’t wait to see what new products are inspired by this capability.

Over the next few weeks Samtec will deliver a lot more information about this new product line. Things like:

  • Series information: WF12 (E-band waveguide), WGBA (adaptor), WF15 (V-band waveguide)
  • Product overview
  • Catalog page
  • Prints
  • Videos
  • And more

If you’re interested in learning more, bookmark this page and check on it from time to time over the next few weeks.  You won’t be disappointed.  And that’s how Samtec innovates a new approach for high-frequency analog signal propagation.

Also Read:

Signal Integrity 101: Fundamentals for Professional Engineers

PCI-SIG DevCon and Where Samtec Fits

Samtec Lights Up MemCon

Samtec Dominates DesignCon (Again)


Bluetooth Based Positioning, More Accurate, More Reliable, Safer
by Bernard Murphy on 08-22-2023 at 6:00 am

Using Bluetooth for positioning is a topic I have touched on before, for location services, keyless entry, and asset tracking among other applications. Earlier implementations depended on measures of received signal strength and angle-of-arrival / angle-of-departure, but these have limited accuracy in environments with complex reflection paths such as a parking lot or indoor environments. They also have limited security, driving implementations paired with UWB for keyless entry, though not necessarily fixing security problems in the Bluetooth path. The Bluetooth SIG felt it could do more with BLE and continues to move toward a better solution.

Channel sounding for high accuracy secure positioning

In the cellular world, channel sounding is a technique commonly used to characterize channels for MIMO platforms. It was also popular in early radio-based navigation systems. These methods leverage round-trip signaling across multiple channels, using phase difference or time of flight to minimize multipath effects or to increase positioning accuracy.

The Bluetooth SIG is working on standardized channel sounding in an upcoming release, allowing solution builders to leverage the ubiquity and low power advantages of BLE to further extend value to multiple applications. By combining estimates across multiple channels, channel sounding can achieve position accuracies down to tens of centimeters.
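
The distance estimate itself comes from simple phase arithmetic. As an idealized two-tone sketch (assuming a clean round-trip measurement with no multipath, and not the Bluetooth SIG’s actual procedure), the round-trip phase difference between two carriers grows linearly with distance:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def distance_from_phase(delta_phase_rad, delta_freq_hz):
    """Two-tone, round-trip phase ranging: the phase difference between two
    carriers spaced delta_freq_hz apart grows linearly with distance, so
    d = c * delta_phase / (4 * pi * delta_freq)."""
    return C * delta_phase_rad / (4 * math.pi * delta_freq_hz)

# hypothetical measurement: carriers 2 MHz apart (adjacent BLE channels),
# round-trip phase difference of 0.8 radians between them
print(f"{distance_from_phase(0.8, 2e6):.2f} m")   # ~9.5 m
```

Combining many such channel pairs, rather than just two, is what averages out noise and multipath and pushes the accuracy down toward the tens-of-centimeters range mentioned above.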

This capability can offer significant improvements over current keyless entry systems for cars, both in accuracy and in reliability in congested environments such as a parking lot. Even more important, improved security in Bluetooth channel sounding will defeat relay attacks, which have been demonstrated quite recently and allow an attacker to trick a victim car into opening its locks.

Channel sounding will also be valuable in asset tracking, say in a warehouse where small packages may still be difficult to locate without sufficient positioning accuracy. The same capability could also become very popular for indoor wayfinding, for example in large malls or other buildings.

Opportunity

Market analysts forecast that over 500 million Bluetooth-based low power positioning devices will ship annually by 2030, based on a CAGR of 28.5% from 2022 to 2030. The keyless entry subset of this market also demands the high security offered by this planned release to defeat distance-spoofing and relay attacks and is estimated to reach $5.5B by 2032, at a CAGR of 12.8%. Meanwhile the asset tracking IoT market wants precision positioning to optimize warehouse operations. This market is expected to grow to $8.5B by 2030.

CEVA and channel sounding

CEVA has been a well-established and respected supplier of embedded Bluetooth technologies over the last 20 years through their RivieraWaves family supporting both classic and LE profiles. Since the official channel sounding option is still in discussion in the Bluetooth SIG, CEVA decided to get ready early with a release of their own, aiming to track the anticipated standard as closely as possible. Not a bad idea. That option should allow builders to refine product ideas, ensuring they will be ready when the standard is ratified and solutions are qualified.

You can read the press release HERE and learn more  about CEVA’s Bluetooth connectivity solutions.


How Intel, Samsung and TSMC are Changing the World
by Mike Gianfagna on 08-21-2023 at 10:00 am

Given the changes in the music business, the term “Rock Star” doesn’t really have any relevance to music or its performers anymore.  Instead, we use the term to describe leaders, innovators and generally people or organizations of great significance. In the world of semiconductors, the designers of advanced chips were the rock stars for a long time. Those who put those chips in packages were regarded as the clean-up crew. A roadie for the rock star at best.

Thanks to the coming revolution of multi-die design, packaging is now a fundamental technology driver and advanced packaging engineers are now the rock stars. These trends promise to change the semiconductor industry and the world. SemiWiki recently received some compelling data on this topic. The sources of the data are just as interesting as the data itself. Read on to understand how Intel, Samsung and TSMC are changing the world.

The Data, Who is Watching What

This all began with an email from The Bulleit Group entitled Intel Stock Down, Why TSMC Might Be Responsible. The Bulleit Group, in its own words, was founded in 2012 by Kyle Arteaga and Alex Hunter over a glass of Bulleit Bourbon (no relation). Once I read that, I had to learn more. This is a tech agency with a twist – a singular focus on what’s next, how to get there and what it means. The company’s rotating home page graphic illuminates its mission.

We tell stories about:

  • the future
  • frontier technology
  • sci-fi becoming reality
  • a better world
  • challenging the status quo
  • mavericks
  • the nexus of technology and culture

The punch line is:

Throughout the past ten years, technology has changed everything about the way we live. We’re focused on the next ten.

I found it gratifying that a forward-looking, award-winning organization like this was interested in semiconductor packaging.  But this isn’t the end of the story. The Bulleit Group was writing to share information it had received from LexisNexis, another catchy name I hadn’t heard of.

LexisNexis is an intellectual property solutions provider. The company’s tagline is Bringing Clarity to Innovation. In its own words, it is “proud to directly support and serve (innovators) in their endeavors to better humankind.” Another award-winning and unique organization with a global perspective. And their team is focused on semiconductor packaging. Life is good.

The Data, What it Means

Let’s look at what LexisNexis is saying. Since the organization focuses on IP, a patent analysis is in its wheelhouse. This analysis was based on 37,779 patent families active on 07/20/2023. That’s a lot of data to analyze. The results are quite interesting. Below are the top ten patent producers.

Top ten patent producers

TSMC, Samsung and Intel are clearly in the lead. The Bulleit Group summarizes this data as follows:

LexisNexis discusses the different approaches semiconductor companies take to advanced packaging: Intel focuses on high-performance computing, for example, Samsung targets high-volume assembly, and TSMC aims to capture a wide range of trends from low-cost to high-performance computing. These topics are important not only to the manufacturers above, but also to fabless companies such as AMD, Apple, Broadcom, Nvidia, and Qualcomm, particularly given the continuing demand for AI-enabled technologies.

Reuters covered these trends in a recently published story. The article commented, “Advanced packaging is crucial for improving semiconductor designs as it becomes more difficult to pack more transistors onto a single piece of silicon. Packaging technology enabled the industry to stitch together several chips called “chiplets” – either stacked or adjacent to one another – within the same container.” Once again, the mainstream media has taken notice of significant, world-changing trends in semiconductors. Honestly, this feels quite good.

“They seem to be the ones that pulled the field forward, and set the technology standard,” said LexisNexis PatentSight Managing Director Marco Richter in an interview, referring to TSMC, Samsung and Intel.

Additional insights from LexisNexis illustrate the substantial growth of the advanced packaging sector. See below. Back to that rock star comment.

Advanced packaging trends

To Learn More

If you’re interested in digging deeper, here are two reports from LexisNexis that may be of interest:

Innovation Momentum 2023: The Global Top 100

Exploring the Global Sustainable Innovation Landscape: The Top 100 Companies and Beyond

The second report dives into the links between sustainability and technology innovation. And that’s how Intel, Samsung and TSMC are changing the world.

Also Read:

Intel Enables the Multi-Die Revolution with Packaging Innovation

TSMC Redefines Foundry to Enable Next-Generation Products

VLSI Symposium – Intel PowerVia Technology

TSMC Doubles Down on Semiconductor Packaging!


Enhanced Stochastic Imaging in High-NA EUV Lithography
by Fred Chen on 08-21-2023 at 8:00 am

High-NA EUV lithography is the anticipated new lithography technology to be introduced for the 2nm node. Essentially, it replaces the 0.33 numerical aperture of current EUV systems with a higher 0.55 numerical aperture (NA). This allows the projection of smaller spot sizes and smaller pitches, roughly 60% of those achievable with 0.33 NA systems. However, the depth of focus of the projected image is limited, since it is inversely proportional to the square of the numerical aperture and directly proportional to the square of the resolution. A wider numerical aperture leads to a wider range of illumination angles. This, in turn, leads to a larger phase difference from defocus among different spatial frequency components of the image, such as diffraction orders for an array.
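
For reference, this follows from the standard first-order scaling relations: resolution ≈ k1·λ/NA and depth of focus ≈ k2·λ/NA², so the depth of focus scales as the square of the resolution divided by the wavelength (k1 and k2 being process-dependent constants).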

Figure 1. Allowed illumination angles for 28 nm pitch square array, within the first quadrant of the 0.55 NA pupil. The number labels indicate the maximum phase range (in degrees) among the four beams (diffraction orders) making up the image, for a 30 nm defocus. The blue trapezoid outline indicates the zone for minimum 20% pupil fill (5% in one quadrant) to prevent light absorption within the illuminator.

Figure 1 shows the range of phases among the four diffraction orders of a 28 nm square pitch pattern for a 0.55 NA EUV system. The phase range already exceeds 90 degrees, which leads to a maximal intensity change for at least one of the four orders; 30 nm defocus is therefore prohibitive with 20% pupil fill. Resist thickness is consequently expected to be limited to the order of 20 nm. This leads to an expected absorption of ~10% in organic resists with absorption coefficients of 5/um [1,2]. 0.33 NA systems use resist thicknesses at least twice as thick, allowing absorption of at least ~20%. Thus, 0.55 NA systems have a higher risk of stochastic effects in imaging.

Figure 2. 14 nm dense (28 nm pitch) spot formed as a positive tone darkfield image, assuming 10 mJ/cm2 (absorbed over the whole pitch, >100 mJ/cm2 incident on 20 nm thickness). The nominal image is on the left, while the actual stochastic image is on the right. The numbers indicate the photons absorbed per 0.5 nm x 0.5 nm pixel.

For a 14 nm half-pitch spot in the organic resist case (Figure 2), the absorption outline is rough with very uncertain edge placement, while the interior has numerous areas where no absorption occurs.

Metal oxide resists have absorption coefficients on the order of 20/um (33% absorption in 20 nm thickness) [1,2], so they can provide more photon absorption. However, they are negative tone. That means a bright spot forms a pillar, while a dark spot forms a hole.

Figure 3. 14 nm dense (28 nm pitch) spot formed as a negative tone brightfield image, assuming an absorbed dose of 35 mJ/cm2 (>100 mJ/cm2 incident on 20 nm thickness). The nominal image is on the left, while the actual stochastic image is on the right. The numbers indicate the photons absorbed per 0.5 nm x 0.5 nm pixel.

Even for the more highly absorbing metal oxide resist (Figure 3), the outline is still very rough, and there even appear to be nano-extensions of absorption toward adjacent spots, further blurring the edge. The reason is that the lower edge contrast along the horizontal and vertical directions (for the 28 nm square array pitch) gives stochastic dose fluctuations a larger opportunity to cross the printing threshold.
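
The photon statistics behind these images can be roughly reproduced with a short back-of-the-envelope sketch (assuming ~92 eV per EUV photon, the absorbed doses quoted above, and independent Poisson absorption per pixel; this is an illustration, not the simulation used to generate the figures):

```python
import numpy as np

EV_TO_J = 1.602e-19
PHOTON_ENERGY_J = 92 * EV_TO_J          # EUV photon at 13.5 nm is ~92 eV
PIXEL_AREA_CM2 = (0.5e-7) ** 2          # one 0.5 nm x 0.5 nm pixel, in cm^2

def mean_photons_per_pixel(absorbed_dose_mj_per_cm2):
    """Average number of photons absorbed in one pixel at the given dose."""
    return absorbed_dose_mj_per_cm2 * 1e-3 * PIXEL_AREA_CM2 / PHOTON_ENERGY_J

rng = np.random.default_rng(0)
for label, dose in [("organic resist, 10 mJ/cm2 absorbed", 10.0),
                    ("metal oxide resist, 35 mJ/cm2 absorbed", 35.0)]:
    mean = mean_photons_per_pixel(dose)
    patch = rng.poisson(mean, size=(5, 5))    # a 2.5 nm x 2.5 nm patch of pixels
    print(f"{label}: mean {mean:.2f} photons/pixel")
    print(patch)
```

At roughly 1.7 absorbed photons per pixel for the organic resist and roughly 6 for the metal oxide resist, zero-photon pixels and large relative fluctuations are unavoidable, which is what the rough outlines in Figures 2 and 3 reflect.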

0.55 NA EUV imaging therefore requires even higher absorption than 0.33 NA EUV imaging, and this means much more absorptive resists than have already been studied. One other factor not yet considered is electron blur, as increased absorption also means more electrons moving around in the resist. This needs to be covered in a future study.

References

[1] R. Fallica et al., Proc. SPIE 10143, 101430A (2017).

[2] D. De Simone et al., Proc. SPIE 10143, 101430R (2017).

This article first appeared in LinkedIn Pulse: Enhanced Stochastic Imaging in High-NA EUV Lithography

Also Read:

Application-Specific Lithography: Via Separation for 5nm and Beyond

ASML Update SEMICON West 2023

NILS Enhancement with Higher Transmission Phase-Shift Masks