Is Ansys Reviving the Collaborative Business Model in EDA?
by Daniel Nenni on 12-16-2021 at 10:00 am

The Electronic Design Automation (EDA) industry used to be a bustling bazaar of scrappy startups, along with medium-sized companies that dominated a technology space, and big main-line vendors. The annual Design Automation Conference was noisy, hectic, and sprawled over multiple large convention halls. This diversity meant that designers needed to stitch together their chip design flows with point tools from many software tool vendors. As a consequence, design companies all set up dedicated internal methodology teams (or ‘CAD teams’) to evaluate, set up, integrate, and maintain a suite of design software tools for their chip design teams.

That all changed with the strong consolidation that swept through EDA in the early 2000s. This change mirrored the consolidation experienced across all sectors of the semiconductor industry, including silicon manufacturers, fab equipment vendors, and chip design companies themselves. The EDA industry now counts only four major vendors that make up the bulk of the electronic design software market: Synopsys ($3.7B), Cadence Design Systems ($2.7B), Siemens EDA (~$1.8B), and Ansys ($1.7B).

One casualty of this consolidation drive was the abandonment of the open, collaborative business model espoused by the earlier EDA companies. Instead, a closed-garden mentality took over that strove to put in place “full flow”, single-vendor, exclusive contracts. This approach saw some limited wins but never really succeeded, especially at the major semiconductor houses that provide the bulk of EDA revenues.

There are two major reasons for the failure of this model. First, customers prefer not to tie themselves to a single vendor and lose their leverage in commercial negotiations. Second, economics aside, it was always a technical non-starter. The reality is, and always has been, that no single vendor provides competitive technical solutions for the complete range of requirements from major semiconductor customers. This fact has become even more salient with the rapid technical evolution of both Moore’s Law and More-than-Moore that is leading to radical change in design challenges:

  • Ultra-low voltage, high-speed silicon processes blur the line between analog and digital – high-speed interconnect on interposers now routinely requires detailed electromagnetic field analysis, and dynamic voltage drop now contributes about 30% of total path timing at 7nm and below.
  • 3D-IC multi-die systems and chiplets have blurred the lines between IC and PCB design techniques.
  • Power dissipation has become the number 1 issue for many applications and has blurred the lines between chip and package design. 3D-IC and chiplet designers at the early floorplanning stage now need to worry about thermal management, cooling, heat sinks, and concerns over mechanical stress/warpage reliability.

The result has been a resurgence in the realization that chip design is an incredibly complex multiphysics problem and that no single company has the breadth and depth of technology to solve it all. Ansys, for one, has embraced this reality by leading the industry in reviving the traditional open platform approach to EDA. They have vigorously pursued collaborations, partnerships, and joint developments with other vendors to address deep technical issues facing designers and create unique cross-disciplinary solutions.

The range of Ansys’ collaborations reflects the already broad range of engineering analysis tools it sells. An early step down this road started in 2017 when Ansys and Synopsys partnered to integrate Ansys RedHawk-SC power integrity analysis natively inside Synopsys’ Fusion Compiler implementation product. This collaboration has deepened with the release of Synopsys 3DIC Compiler that relies on Ansys RedHawk-SC Electrothermal for thermal and interposer analysis of 3D-ICs.

Ansys has also collaborated with Siemens EDA to deliver a direct link between Siemens’ Veloce hardware emulator and the Ansys PowerArtist RTL power analysis tool. This push towards collaboration was on full display at the recent IDEAS Forum hosted by Ansys, where we saw keynote speeches by Tom Lillig, Technology Business Leader at Keysight, Siva Yerramilli, Corporate VP for Strategy and System Architects at Synopsys, and Ted Pawela, Chief Ecosystem Officer at Altium. There was also a presentation by Gilles Lamant from Cadence Design Systems on joint optical solutions. This is an unprecedented range of competing companies that nevertheless see value in coming together to address specific problems for their customers, and I believe it may herald the revival of a more cooperative business trend in building viable electronic design flows.

Ansys has embraced this market development with its own internal reorganization that saw the merger of its Semiconductor division and Electronics division under the leadership of John Lee, GM Electronics and Semiconductor Business Unit. John is a strong proponent of providing open platforms to allow the broadest array of design tools to work together and exchange data. Under his leadership, Ansys has broadened its relationship with Synopsys, shifted its own development priorities to embrace open platforms, and has reached out to complementary tool providers to create industry solutions for Ansys’ diverse customer base. I think this is an interesting trend that may well benefit the EDA industry in general.

Also Read

A Practical Approach to Better Thermal Analysis for Chip and Package

Ansys CEO Ajei Gopal’s Keynote on 3D-IC at Samsung SAFE Forum

Ansys to Present Multiphysics Cloud Enablement with Microsoft Azure at DAC


Ramping Up Software Ideas for Hardware Design
by Bernard Murphy on 12-16-2021 at 6:00 am

This is a topic in which I have a lot of interest, covered in a panel at this year’s DAC; Raúl Camposano chaired the session. I had earlier covered a keynote by Moshe Zalcberg at Europe DVCon late in 2020; he now reprises the topic. Given the incredible pace of innovation and scale in software development these days, I don’t see what we have to lose in looking harder for parallels and ramping up software ideas for hardware design.

Moshe Zalcberg on why we should think about this

Moshe makes the point that chip design is outrageously expensive, and designers are understandably averse to risky experiments. But as design continues to become even more outrageously expensive, the cost of not looking for new ideas becomes ever harder to ignore.

He cites the relatively slow change in, for example, verification methodologies versus the more rapid evolution of mobile phone technologies, semiconductor processes, and the most popular software languages. We’re staying level on verification effort and respins as complexity continues to grow, but he wonders if we could do better. Competition isn’t only with complexity; we’re also competing with each other. Any team that is able to find significant advantage in some way will jump ahead of the rest of us. Yes, change is risky, but so is stasis.

He suggests a range of ideas we might borrow from the software world, from open-source to Python as a language (for test especially), to Agile, continuous integration and deployment (CI/CD), leveraging data more effectively and of course AI. Tentative steps are already being taken in some areas; we always need to be thinking about what we might borrow from our software counterparts.
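
To make the Python-for-test idea concrete, here is a minimal sketch using the open-source cocotb library, which lets you drive an RTL simulation from Python. The DUT and its port names (a, b, sum, clk) are hypothetical, purely for flavor, and this is my illustration rather than anything from the talk:

```python
# Minimal cocotb testbench sketch (hypothetical adder DUT with ports a, b, sum, clk)
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def adder_smoke_test(dut):
    # Start a free-running 10 ns clock on the DUT clock pin
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

    dut.a.value = 2
    dut.b.value = 3
    await RisingEdge(dut.clk)   # wait for the result to register
    await RisingEdge(dut.clk)

    assert dut.sum.value == 5, f"adder returned {dut.sum.value}"
```

Tests like this drop naturally into an automated regression pipeline, which is exactly the kind of cross-pollination Moshe is arguing for.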

Rob Mains on Open-Source Chip Design

I hear a lot of enthusiasm for open-source EDA, but what about open-source design? The RISC-V ecosystem is showing this can work. Rob Mains is executive director at CHIPS Alliance, whose mission is to encourage collaboration and open-source practices in hardware. CHIPS Alliance is part of the Linux Foundation, which is a good start. They have heavyweight support from Google, Intel, SiFive, Alibaba and a lot of other companies and universities.

Rob sees a primary focus in promoting an open ecosystem, through, for example, standard bus protocols like OmniXtend and the Advanced Interface Bus (AIB) between chiplets. He also sees opportunity for certain open-source EDA directions which could change the game, for example an open PDK infrastructure. In this spirit he also mentioned Chisel and Rocket Chip, along with the BAG family of generators from Berkeley, the FASoC family of tools from the University of Michigan, and layout synthesis from UT Austin.

Rob has some interesting predictions for this decade, for example that 50% or more of designs will be open-source based and that design entry to implementation will no longer require human intervention. Bold claims. Viewed as moonshots, I’m sure they’ll drive some interesting progress.

Neil Johnson on Agile Design

Neil Johnson, now at Siemens, is a very accomplished thinker and speaker in this domain. He has embraced Agile and related methods wholeheartedly, yet accepts that he lives in a world of skeptics who “don’t buy any of this Agile nonsense”. He starts with his own ten-year journey in Agile, a testament to his credibility in this domain. He follows that with a poem he wrote titled “Your Agile is for Chumps”, a gentle but persuasive walk through counterarguments to the opposition he has heard to Agile methods.

I won’t ruin the experience by attempting to summarize this presentation. You should really watch the video (link below). I will say that he had me convinced, not by beating me over the head with claims that my arguments are wrong, but by gently reasoning that there’s a different way to look at the components of Agile, and that perhaps traditional approaches may not be as solid as we think.

Vicki Mitchell on MLOps

This talk, presented by Vicki Mitchell, may require a couple of cognitive jumps for most of us. First you need to understand what DevOps is in the software world. According to AWS, “this is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.” In other words, not the end software products but all the infrastructure and ecosystems that support the development of those products. These concepts are creeping into hardware design through adoption of tools like Jama, Jenkins and others. Vicki has presented multiple times on the value of DevOps practices in hardware design.

Now think about that philosophy for ML, particularly as ML is adopted in design practices. Hang on tight; this does make sense, but it is mind-bending. Vicki presents it as putting data and machine learning together. The summary I find easiest to understand is that the use of ML in design cannot depend on a one-time training activity. It must continuously improve as new designs are encountered and new data is generated. MLOps is a way to make ML adjust flexibly yet robustly to this landscape of changing data, requirements and, quite possibly, models.

When ML becomes part of even a waterfall flow with regressions, or of a CI/CD flow, it must fit into the DevOps infrastructure: CI/CD, automated testing, and pipelining, so that failing or slow components don’t roadblock the whole flow as tests, design data and constraints change. Everything in the flow must support continuous integration and be continuously deployable. There’s a lot more good stuff here and in all the talks. Watch the video.
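
As a deliberately simplified illustration of the MLOps idea, here is a hedged Python sketch of a drift check that a CI/CD regression flow could run before continuing to trust an ML model on new design data. The threshold, statistics file, and drift metric are all hypothetical placeholders of my own, not anything from the talk:

```python
import json
import pathlib
import numpy as np

DRIFT_THRESHOLD = 0.15  # hypothetical tolerance; a real flow would calibrate this

def feature_drift(train_stats: dict, new_data: np.ndarray) -> float:
    """Crude drift metric: worst relative shift in per-feature means."""
    mu_train = np.asarray(train_stats["mean"])
    mu_new = new_data.mean(axis=0)
    return float(np.abs(mu_new - mu_train).max() / (np.abs(mu_train).max() + 1e-9))

def ci_gate(stats_file: str, new_data: np.ndarray) -> str:
    """Decide whether the CI pipeline can keep deploying the current model."""
    stats = json.loads(pathlib.Path(stats_file).read_text())
    if feature_drift(stats, new_data) > DRIFT_THRESHOLD:
        return "retrain"  # trigger the training stage; don't block unrelated jobs
    return "deploy"       # model still matches the incoming design data
```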

Finally, a shout-out to Raúl, my partner with Paul Cunningham on the Innovation in Verification blogs. He started with a remembrance of Jim Hogan, whom we all miss. Raúl asked several insightful questions at the end of each talk. This blog would run to many thousands of words if I did justice to his questions and each of the talks. Again, watch the video!

Also Read:

Verification Completion: When is enough enough?  Part I

Verification Completion: When is Enough Enough?  Part II

On Standards and Open-Sourcing. Verification Talks


Top 10 Takeaways from DAC 2021
by Tom Dillinger on 12-15-2021 at 2:00 pm

The “in-person” portion of the Design Automation Conference (DAC) was recently held in San Francisco.  (As several presenters were unable to attend, a “virtual” program is also available.)  The presentations spanned a wide gamut – e.g., technical advances in design automation algorithms;  new features in commercial EDA tools;  technical and financial trends and forecasts;  and, industry standards activities.

In recent years, the DAC Organizing Committee has expanded the traditional algorithm/tool focus to include novel IP, SoC, and system design techniques and methodologies.  The talks in the Design and IP Track provided insights into how teams are addressing increasing complexity afforded by new silicon and packaging technologies, as well as ensuring more stringent requirements on reliability, security, and safety are being met.

Appended below is a (very subjective) list of impressions from DAC.  It is likely no surprise that several of these refer to the growing influence of machine learning (ML) technology on both the nature of chip designs and the EDA tools themselves.  The impact of cloud-based computational resources was also prevalent in the trend presentations.  Here are the Top 10 takeaways:

(10)  systems companies and EDA requirements

Several trend-related presentations highlighted the investments being made by hyperscale data center and systems companies in internally staffing SoC design teams – e.g., Google, Meta, Microsoft, Amazon, etc.  A panel discussion that asked representatives from these companies “What do you need from EDA?” could be summed up in four words:  “bigger, faster emulation systems”.

(Parenthetically, one rather startling financial forecast was, “50% of all EDA revenue will ultimately come from systems companies.”)

(9)  domain-specific architectures

The financial forecast talks were uniformly upbeat (see (8)) – hardly a financial bear in sight.  The expectation is that (fabless, IDM, and systems) IC designers will increasingly be seeking to differentiate their products by incorporating “domain-specific architectures” as part of SoC and/or package integration.  As will be discussed shortly, the influence of ML opportunities to add to product features is a key driver for DSA designs, whether pursuing data center training or data center/edge inference.

The counter-argument to DSA designs is that ML network topologies continue to evolve rapidly (see (6)).  For data center applications, a general-purpose programmable engine, such as a GPGPU/CPU with a rich instruction set architecture may provide more flexibility to quickly adapt to new network types.  A keynote speaker provided the following view:  “It’s a tradeoff between the energy costs of computation versus data movement.  If a general-purpose (GPU) architecture can execute energy-intensive MAC computations for complex data types, the relative cost of data movement is reduced – no need for specialized hardware.”

(8)  diverse design starts

A large part of the financial optimism is based on the diversity of industries pursuing new IC designs.  The thinking is that even if one industry segment were to stall, other segments would no doubt pick up the slack.  The figure below illustrates the breadth in design starts among emerging market segments.

As EDA industry growth relies heavily on design starts, the financial forecasts were very optimistic.

(7) transition to the cloud

Another forecast – perhaps startling, perhaps not – was “50% of all EDA computing cycles will be provided by cloud resources”.

The presenter’s contention was that new, small design companies do not have the resources or the interest in building an internal IT infrastructure, and are “more open to newer methods and flows”.

Several EDA presentations acknowledged the need to address this trend – “We must ensure the algorithms in our tools leverage multi-threaded and parallel computation approaches to the maximal extent possible, to support cloud-based computation.” 

Yet, not everyone was convinced the cloud transition will proceed smoothly…  read on.

(6)  “EDA licensing needs to adopt a SaaS model”

A very pointed argument by a DAC keynote speaker was that EDA licensing models are inconsistent with the trend to leverage cloud computing resources.  He opined, “A stopped watch is correct twice a day – similarly, the amount of EDA licenses is right only twice in the overall schedule of a design project.  The rest of the IT industry has embraced the Software as a Service model – EDA companies need to do the same.”

The figure below illustrates the “stopped watch licensing model”.

(The opportunity to periodically re-mix license quantities of specific EDA products in a multi-year license lease agreement mitigates the issue somewhat.)  The keynote speaker acknowledged that changing the existing financial model for licensing would encounter considerable resistance from EDA companies.

(5)  ML applications

There were numerous presentations on the growth anticipated for ML-specific designs, for both very high-end data center training/inference and for low-end/edge inference.

  • high-end data center ML growth

For ML running in hyperscale data centers, the focus remains on improving the classification accuracy for image and natural language processing.  One keynote speaker reminded the audience, “Although AI concepts are decades old, we’re really still in the very early stages of exploring ML architectures for these applications.  The adaptation of GPGPU hardware to the ML computational workload really only began around 10 years ago.  We’re constantly evolving to new network topologies, computational algorithms, and back-propagation training error optimization techniques.”

The figure below highlights the complexity of neural network growth for image classification over the past decade, showing the amount of computation required to improve classification accuracy.

(The left axis is the “Top 1” classification match accuracy to the labeled training dataset.  One indication of the continued focus on improved accuracy is that neural networks used to be given credit for a classification match if the correct label was in the “Top 5” predictions.)
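
For readers unfamiliar with the metric, here is a small illustrative Python snippet (my own, not from the presentation) showing how Top-1 versus Top-k classification accuracy is computed:

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int) -> float:
    """scores: (n_samples, n_classes) prediction scores; labels: true class ids."""
    topk = np.argsort(scores, axis=1)[:, -k:]          # k highest-scoring classes
    hits = (topk == labels[:, None]).any(axis=1)
    return float(hits.mean())

scores = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
labels = np.array([2, 0])
print(top_k_accuracy(scores, labels, 1))  # Top-1: only the best guess counts -> 0.5
print(top_k_accuracy(scores, labels, 2))  # Top-k credit is more forgiving -> 1.0
```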

  • low-end/edge ML growth

A considerable number of technical and trend presentations focused on adapting ML networks used for training to the stringent PPA and cost requirements of edge inference.  High-precision data types for weights and intermediate network node results may be quantized to smaller, more PPA-efficient representations.
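
As a simple illustration of what that quantization step looks like, here is a sketch of one common scheme, symmetric linear quantization of float32 weights to int8. This is illustrative only; production flows use more sophisticated calibration:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0           # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale          # dequantized approximation
print(np.abs(w - w_hat).max())                # worst-case quantization error
```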

One presenter challenged the audience with the following scenario. “Consider Industrial IoT (IIoT) applications, where sensors and transducers integrated with low-cost microcontrollers provide real-time monitoring.  In many cases, it’s not sufficient to simply detect a vibration or noise or pressure change or image defect that exceeds some threshold – it is necessary to classify the sensor output to a specific pattern and respond accordingly.  This is ideally suited to the use of small ML engines running on a corresponding microcontroller.  I bet many of you in the audience are already thinking of IIoT ML applications.”

(4)  HLS and designer productivity

There were several presentations encouraging design teams to embrace higher levels of design abstraction, and correspondingly high-level synthesis, to address increasing SoC complexity.

Designers were encouraged to go to SystemC.org to learn of the latest progress in the definition of the SystemC language standard, and specifically, the SystemC synthesizable subset.

(3)  clocks

Of all the challenges faced by design teams, it was clear from numerous DAC presentations that managing the growing number of clock domains in current SoC designs is paramount.

From an architectural perspective, setting up and (flawlessly) exercising clock domain crossing (CDC) checks for proper synchronization is crucial.

From a physical implementation perspective, developing clock cell placement and interconnect routing strategies to achieve latency targets and observe skew constraints is exceedingly difficult.  One insightful paper highlighted the challenges in (multiplexed) clock management and distribution for a PCIe5 IP macro.

Increasingly, physical synthesis flows are leveraging “useful skew” between clock arrival endpoints as another optimization method to address long path delays (and, as an indirect benefit, to distribute instantaneous switching activity).  A compelling DAC paper highlighted how useful skew indeed helps close “late” timing, but may aggravate “early” timing paths, necessitating much greater delay buffering to fix hold paths.  The author described a unique methodology to identify a combination of useful skew implementations to adjust both late and early clock arrival endpoints to reduce hold buffering, saving both power and block area.
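
A toy numerical example (my numbers, not the paper’s) shows the mechanism: delaying the capture clock buys setup slack on the late path but eats directly into hold slack on the early path into the same flop.

```python
# Toy numbers showing why useful skew helps setup but can hurt hold.
T_clk   = 1.0   # ns clock period
t_setup = 0.05
t_hold  = 0.05
d_max   = 1.02  # longest combinational delay into the capture flop: setup fails
d_min   = 0.10  # shortest path into the same flop

def slacks(skew):
    """Capture clock delayed by `skew` relative to launch clock."""
    setup_slack = (T_clk + skew) - d_max - t_setup
    hold_slack  = d_min - t_hold - skew
    return setup_slack, hold_slack

print(slacks(0.0))   # (-0.07, 0.05): late path fails
print(slacks(0.10))  # (0.03, -0.05): setup fixed, but hold now needs delay buffers
```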

Static timing analysis requires diligent attention to clock definitions and timing constraints – multiply that effort for multi-mode, multi-corner analysis across the range of operating conditions.  One presentation highlighted the need for improved methods to characterize and analyze timing with statistical variation.  In the future, it will become more common to tell project management that “the design is closed to n-sigma timing”.
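
For a single path whose delay is assumed to be normally distributed, “closed to n-sigma timing” maps to a probability via the Gaussian CDF; a quick sketch of that arithmetic (my illustration, not from the presentation):

```python
from math import erf, sqrt

def sigma_to_yield(n: float) -> float:
    """Probability a normally distributed path delay stays below mean + n*sigma."""
    return 0.5 * (1.0 + erf(n / sqrt(2.0)))

for n in (2, 3, 4):
    print(n, f"{sigma_to_yield(n):.6f}")
# 2 -> 0.977250, 3 -> 0.998650, 4 -> 0.999968
```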

(2)  ML in EDA

There was lots of interest in how ML techniques are influencing EDA tools and flows.  Here are some high-level observations:

  • ML “inside”

One approach is to incorporate an ML technology directly within a tool algorithm.  Here was a thought-provoking comment from a keynote talk:  “The training of ML networks takes an input state, and forward calculates a result.  There is an error function which serves as the optimization target.  Back propagation of partial derivatives of this function with respect to existing network parameters drives the iterative training improvement.  There are analogies in EDA – consider cell placement.”

The keynote speaker continued, “The current placement is used to calculate a result comprised of a combination of total net length estimates, local routing congestion, and critical net timing.  The goal is to optimize this (weighted) result calculation.  This is an ideal application to employ ML techniques within the cell placement optimization algorithm.”
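
Here is a toy sketch of that analogy (entirely illustrative, not any vendor’s algorithm): a weighted squared-wirelength cost is forward-calculated, its gradient is back-propagated to the movable cell positions, and gradient descent iterates, just as in network training.

```python
import numpy as np

# Toy 1-D placement: cells 0 and 3 are fixed I/O; cells 1 and 2 are movable.
pos = np.array([0.0, 2.0, 5.0, 10.0])      # cell positions
nets = [(0, 1), (1, 2), (2, 3)]            # two-pin nets
weights = np.array([1.0, 4.0, 1.0])        # e.g., net 1 is timing critical
movable = np.array([False, True, True, False])

lr = 0.05
for _ in range(200):
    grad = np.zeros_like(pos)
    for (a, b), w in zip(nets, weights):
        d = pos[a] - pos[b]
        grad[a] += 2 * w * d               # derivative of w * (pos_a - pos_b)^2
        grad[b] -= 2 * w * d
    pos[movable] -= lr * grad[movable]     # gradient-descent placement update

print(pos)  # movable cells settle where the weighted cost is minimized
```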

  • ML “outside”

Another methodology approach is to apply ML techniques “outside” an existing EDA tool/algorithm.  For example, block physical implementation is an iterative process, from initial results using early RTL through subsequent RTL model releases.  Additionally, physical engineers iterate on a single model using various combinations of constraints provided throughout the overall flow, to evaluate QoR differences.  This accumulation of physical data over the development cycle can serve as the (design-specific) data set for ML training, helping the engineer develop an optimal flow.
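
A hedged sketch of what “ML outside” could look like in practice: fitting a simple regressor on accumulated run data to predict QoR for an untried recipe. The knobs, data values, and model choice here are hypothetical placeholders of my own:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Columns: target clock (GHz), utilization, max routing layer (hypothetical knobs)
runs = np.array([[1.0, 0.60, 8],
                 [1.2, 0.70, 10],
                 [1.1, 0.65, 9],
                 [1.3, 0.75, 10],
                 [0.9, 0.55, 8]])
wns = np.array([0.05, -0.08, 0.01, -0.15, 0.09])  # observed worst negative slack

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(runs, wns)
print(model.predict([[1.15, 0.68, 9]]))   # predicted WNS for an untried recipe
```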

(1)  functional safety and security

Perhaps the most challenging, disruptive, yet exciting area impacting the entire design and EDA industry is the increasing requirement to address both functional safety and security requirements.

Although often mentioned together, functional safety and security are quite different, and according to one DAC presenter “may even conflict with each other”.

FuSa (for short) refers to the requisite hardware and software design features incorporated to respond to systematic and/or random failures.  One presenter highlighted that the infrastructure is in place to enable designers to identify and trace the definition and validation of FuSa features, through the ISO 26262 and IEC 61508 standard structure, saying, “We know how to propagate FuSa data through flows and the supply chain.  Correspondingly, we have confidence in the usage of software tools.”  Yet, a member of the same panel said, “The challenge is now building the expertise to know where and how to insert FuSa features.  How do you ensure the system will act appropriately when subjected to a random failure?  We are still in the infancy of FuSa as an engineering discipline.”

The EDA industry has responded to the increasing importance of FuSa developments by providing specific products to assist with ISO 26262 data dependency management and traceability.

Security issues have continued to arise throughout our industry.  In short, security in electronic systems covers:

  • side channel attacks (e.g., an adversary listening to emissions)
  • malicious hardware (e.g., “Trojans” inserted in the manufacturing flow)
  • reverse engineering (adversaries accessing design data)
  • supply chain disruptions (e.g., clones, counterfeits, re-marked modules;  the expectation is that die will be identified, authenticated, and tracked throughout)

The design implementation flow needs to add security hardware IP to protect against these attack “surfaces”.

Here’s a link to another SemiWiki article that covers in more detail the activities of the Accellera Security for Electronic Design Integration working group to help define security-related standards and establish a knowledge base of progress in addressing these issues – link.

To me, product FuSa and security requirements will have pervasive impacts on system design, IP development, and EDA tools/flows.

Can’t wait for the next DAC, on July 10-14, 2022, in San Francisco.

-chipguy


Semicon West is Semicon Less
by Robert Maire on 12-15-2021 at 10:00 am

Semicon West 2021
  • Semicon West was Semicon Less – Less Customers & Vendors
  • Everyone is busy as can be, maybe too busy to attend
  • Those who were there talk about supply chain issues & stress
  • How long does the party last & where does the money come from?

Semicon West was Semicon Less….

We attended a “Hybrid” version of Semicon West last week, a combination of online and in person, and we chose to attend in person.

There were fewer booths; it was the smallest show we have ever seen.

Almost all of the major tool makers were “no shows”, with the sole exception of TEL (Tokyo Electron). MIA were ASML, Applied & Lam. KLA had a teeny, tiny booth of zero consequence.

Also missing were the off site hotel suites where the real meetings actually take place.

Many of the booths belonged to very low-level sub-suppliers to the industry, selling rubber gloves and bits and pieces.

Perhaps more important was the total absence of significant customers. Both Intel and Micron were MIA. They usually make up a large part of attendance. There were obviously very few foreign customers due to travel issues.

Coincident with Semicon West was DAC (the Design Automation Conference) at Moscone West, which we also attended. Though attendance was down, the number of booths and the size of the conference were closer to normal, given the US focus of the EDA industry. Obviously there was little talk of supply chain issues at DAC.

“Whack a Mole”

Most everyone in the tool business that we spoke to at the show is preoccupied playing “Whack a Mole”, AKA “What part are we short of today?”.

We suggested several months ago that shortages would get worse before getting better as inventories dry up and problems ricochet through the supply chain.

We saw that come true with AMAT missing $300M of revenue.
We will likely see other companies who have kept it together until this point start to experience some impacts, with those impacts varying in degree.

Demand remains super strong with backlog stretching

If anything, the limited supply and supply chain concerns have kept companies busy placing orders, which in large part seem to be driven by fear of not being able to get tools or parts at a later date.

How much of this is double or triple ordering or just “positional” ordering is hard to say.

Companies claim to be trying to make sure the orders are real by asking for cash deposits or other assurances, but it’s certainly impossible to guarantee if things go off a cliff.

Companies that make unique products or are unique suppliers of technology not available from competitors sound like they have the most backlog.

We heard rumors at the show of certain KLA products with 30 month lead times.

It seems as if ASML has an order book that is so full, not just in EUV, that they will likely be lens constrained for several years to come as conservative Zeiss is not substantially expanding capacity.

We wonder how much market share shift will happen in less critical, commodity-like applications such as non-critical etch or deposition, where there may be multiple suppliers that are capable.

Right now with supply being tight it may be less of an issue. We would think that this may favor the smaller or local suppliers.

When does this all end and how?
Where does the money come from?

Most manufacturers are far too busy with supply chain issues to either worry or care about how long and strong the current cycle is.

The bigger question that seems higher on the priority list is where does all the money come from?

If you try to even straight-line the current demand, the capital spend goes to infinity and beyond.

Even if Chips for America gets passed, it’s barely a drop in the bucket. Issues with Chinese companies and their ability to access capital remain.

Alibaba bidding on Tsinghua is not very reassuring….but somehow the orders keep flowing from China.

Selling Mobileye to raise money…Good idea

We applaud Intel for selling off Mobileye. We thought it was a huge distraction to their core business. They got super lucky in that they will likely get a bit more than they paid via the spin.

Our main concern is the amount of time it will take to extract all the money.
Going the IPO route maximizes the return but also takes the longest, and Intel needs the money to buy tools and fabs right now….not 5 years from now.

If you add up all the spend that Intel has announced, it’s very far in excess of $100B. Mobileye will make a dent, but not that big a dent, in their spend. If they don’t get the biggest slug of the “Chips for America” handout, I don’t know where the money will come from.

The Stocks

Right now everyone is both very happy and very busy. There will likely be more earnings and revenue disruptions due to supply chain issues when Q4 is reported but for the most part everyone will report good news.

We don’t think supply chain issues will have broad enough impact in the industry to cause the stocks to slow.

We would likely focus on those with the most immunity to share loss or order loss such as ASML & KLA, but everyone is doing well.

Some of the smaller players could pick up share, but right now most chipmakers are too busy trying to get existing tools to try out new tools from second-tier vendors.

Materials and consumables also seem like a safe haven as production continues to be quite strong without significant risk to the downside.

Although there weren’t any “good times” parties at Semicon West as in previous strong cycles, the fact that everyone is busy remains reassuring for now.

Also Read:

Supply Chain Breaks Under Strain Causes Miss, Weak Guide, Repairs Needed

KLAC- Foundry/Logic Drives Outperformance- No Supply Chain Woes- Nice Beat

Intel – “Super” Moore’s Law Time warp-“TSMC inside” GPU & Global Flounders IPO


PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions
by Kalar Rajendiran on 12-15-2021 at 6:00 am

We live in the age of big data. No matter how fast and complex modern SoCs are, system performance ultimately comes down to how quickly data can get in and out. And there is a lot of data that today’s systems need to process. Naturally, system interfaces such as PCIe, DDR, HBM, etc., have been evolving rapidly too, to support faster and faster data transfer speeds. Just recently, PCIe 5.0 and HBM2E started getting supported, and already there is an expectation for next-gen speeds to be supported within a year or so.

With such a high-speed race to support big data needs, SoCs are naturally being designed to incorporate these next-gen interfaces. The big question is: how do you validate SoCs designed to support advanced interfaces? Typically, prototyping solutions are used to validate an SoC design instead of waiting for the manufactured SoC. Prototyping solutions have been in the news a lot lately. A widely used approach relies on FPGA-based platforms. While the FPGA’s flexibility lends itself nicely to implementing prototypes of very complex, high-performance SoCs, FPGAs do trail in terms of supporting the most advanced interface speeds.

Last week at DAC, Avery Design Systems announced speed adapters for FPGA prototyping solutions for validating data center and AI/ML SoCs. You can access the press release here. Avery has developed PCIe and memory speed adapters that can be synthesized into S2C’s FPGA prototyping platforms to support up to PCIe 6.0, HBM3 and LPDDR5 protocol interfaces. I had an opportunity to speak with Chris Browy and Ying Chen regarding this announcement. Chris is VP of sales and marketing at Avery Design Systems, and Ying is VP of sales and marketing at S2C. The following is a synthesis of my discussion along with some highlights from the press release.

Current Limitation of FPGA Prototyping Solutions

As a global leader in FPGA prototyping solutions, S2C has been delivering rapid prototyping solutions since 2003. S2C offers many different product lines leveraging both Intel and Xilinx FPGAs. Of late, their customers have been experiencing limitations when it comes to validating their next-gen SoC and ASIC designs. These limitations are tied to the capabilities of current FPGAs. For example, the Xilinx Virtex UltraScale+ series is only PCIe 3.0 compliant. While more advanced FPGAs can support the latest PCIe speeds, they cannot yet fit large SoCs. Thus, customers are constrained by either capacity or lack of support for advanced interface speeds.

Overcoming the Limitation

In order to test whether an SoC design is compatible, say, in a PCIe Gen5 or Gen6 type of system, there needs to be a speed adapter that supports the native interface protocols, albeit at scaled frequencies, to match the design running in the FPGA prototype system. The speeds need to be bridged between the real host system and the FPGA prototype, which will be operating at a lower speed. A similar approach is needed for bridging memory interface speeds.

As a leader in functional verification solutions, Avery Design Systems has been enabling system and SoC design teams to achieve dramatic productivity improvements since 1999. With the speed adapters announcement, they have expanded their product portfolio to enable system validation of SoCs that incorporate the most advanced interconnect and memory technologies. As a result of a partnership, customers can now perform validation in actual systems running the latest PCIe and memory technologies on S2C’s Prodigy Logic Matrix LX2 prototyping System.

Avery PCIe Speed Adapter

This adapter enables running FPGA prototypes on real system platforms running PCIe host interfaces at native speeds. The PCIe speed adapter implements internal buffering in order to handle the native speeds.
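
A back-of-the-envelope sketch (my own arithmetic, not Avery’s implementation detail) of why internal buffering is needed: if the host bursts at native speed while the prototype drains at a scaled-down rate, the difference has to sit in a FIFO.

```python
def min_fifo_depth(burst_beats: int, scale: int) -> int:
    """Beats that pile up while draining at 1/scale of the fill rate."""
    return burst_beats - burst_beats // scale

# e.g., a hypothetical 256-beat burst into a prototype running 16x slower
print(min_fifo_depth(256, 16))   # 240 beats must be buffered
# at a 1/64 scaling factor the adapter must absorb nearly the whole burst
print(min_fifo_depth(256, 64))   # 252
```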

Highlights

  • Connect SoC prototype PCIe Endpoint (EP) to a full speed PCIe Root Complex (RC)/host server platform slot
  • Configure RC and EP configurations independently
    • EP interface compliant with PCIe Gen3 thru Gen6
    • RC interface compliant with native PC host
    • Ex: (EP 16x, PIPE 64bit, Gen 4.0) to (RC 4x, PIPE 32bit, Gen 3.0)
  • Multiple lane widths: x4, x8, and x16
  • Supports multiple PIPE Data widths and PIPE rates
  • Original mode, SERDES architecture, Low pin count interfaces
  • Frequency scaling factor of emulated device down to 1/64
  • Power management state of L0
  • Physical layer initialization, including equalization

Avery Memory Speed Adapter

This adapter enables running an FPGA prototype when the native memory speed of operation or memory type is not achievable by FPGA prototype systems. For example, an AI SoC may be using HBM2E, HBM3 or GDDR6 memory interfaces. An IoT or mobile device SoC may be using an LPDDR5 interface. The adapter connects to the SoC being validated at the DFI interface.

Highlights

  • Supports HBM2E, HBM3, DDR4, LPDDR4, LPDDR5 DFI 5.0 interfaces to SoC (DUT)
  • Frequency ratioing of 1:1, 1:2, and 1:4
  • Debugging log file through UART interface controlled by MCU
  • Supports Xilinx FPGAs and leverages low cost DDR4 daughter card memory
  • Includes simulation, synthesis and timing scripts

Partnered Solution

The partnership between Avery and S2C has enabled a capability that was not possible before. In order to continue to meet their customers’ innovative SoC development needs, Avery has developed speed adapters for PCIe and memory interfaces. The S2C prototyping platform contains multiple FPGAs. The Avery speed adapter IP gets synthesized into the FPGA that is connected to the host through PCIe, or the FPGA that contains the memory controller. The partnered solution broadens and upgrades the speed interfaces that can be supported by the FPGA prototyping platforms. Customers looking for the latest interfaces would benefit from the Avery-S2C partnered solution. The solution can support PCIe Gen4, 5, and 6, memory interfaces such as HBM2E and HBM3, as well as low power DDR interfaces such as LPDDR5.

You can access the Avery Design Systems’ press release on speed adapters here.

For more information about S2C’s prototyping solutions, visit www.s2ceda.com.

For more information about Avery’s speed adapters, visit www.avery-design.com.

Also Read:

S2C EDA Delivers on Plan to Scale-Up FPGA Prototyping Platforms to Billions of Gates

Successful SoC Debug with FPGA Prototyping – It’s Really All About Planning and Good Judgement

S2C FPGA Prototyping solutions help accelerate 3D visual AI chip


DAC 2021 – Accellera Panel all about Functional Safety Standards
by Daniel Payne on 12-14-2021 at 10:00 am

Functional safety has been at the forefront of the electrification of our vehicles, with new ADAS features and the push to reach autonomous driving, while maintaining compliance with the ISO 26262 functional safety standard. I attended the Accellera-hosted panel discussion on Monday at DAC, hearing from functional safety panelists who work at AMD, Arm, Texas Instruments, and DARPA. Alessandra Nardi, Accellera Functional Safety Working Group Chair, moderated the panel discussion and started out with a big-picture overview.

The Functional Safety Working Group (FS WG) started out as a proposed working group back in October 2019, and soon had a kickoff in December 2019. By February 2020 the working group was officially formed from member company representatives, and about 30 companies are now working on the standard. You can expect a Data Model definition white paper to come out in Q1 2022, and a draft Language Reference Manual (LRM) release by Q2 2022.

The big idea is to exchange the same FS data across various automation tools, where there’s a connection between the FS data and the design info. So Accellera will define a data format/language.

Today there are many functional safety standards, like ISO 26262, across industries such as medical, industrial, aviation, railways, and machinery. The Accellera FS WG will create a data standard, then collaborate with the IEEE as the standard becomes more mature, leading to publication as IEEE P2851.

AMD – Alexandre Palus

Functional Safety (FuSa) is concerned with both Systematic Failures and Random Failures. For vehicles, the Automotive Safety Integrity Level (ASIL) has defined four classes from A through D, where a failure in ASIL-D results in human death, and ASIL-B failures cause human injury.

Design for safety costs more on a project up front, but in the long run is less costly, based on experience, because there are fewer mask spins to correct safety issues. Teams really need to have a FuSa architect, and not treat FuSa as an afterthought, in order to be successful. If everyone does their job right, then nobody ends up in court.

Arm – Ghani Kanawati

Does FuSa add more complexity to the development of a product? Yes, it’s a new requirement, but we need to convince design managers that FuSa is simple to implement.  There may be conflicts during development between security and safety goals, and each project is unique in its requirements, but safety concerns need to come into the project at the very beginning.

Having a lifecycle process in place ensures a higher chance of success when designing for FuSa. Here’s a typical lifecycle process for Soft IP development at Arm:

The FuSa design process does add more steps, but they are well understood, and sure, there’s a learning curve. If a random fault happens in your hardware design, then what happens to the system behavior? Does your system respond safely?

Texas Instruments – Bharat Rajaram

There are over a dozen FS standards spanning many inter-related industries, but ISO 26262 is the gold standard for semiconductor companies, and at TI they’ve been designing for safety since the 1980s. Automotive examples that require FuSa include:

  • ASIL A – Rear lights, Vision ADAS
  • ASIL B – Instrument cluster, headlights, rear-view cameras
  • ASIL B&C – Active suspension
  • ASIL C – Radar cruise control
  • ASIL C&D – Engine management
  • ASIL D – Antilock braking, electric power steering, airbag

Industrial systems have their own FuSa levels: SIL 1, SIL 2, SIL 3. Safety engineers are also concerned about Failure In Time (FIT) rates, where there are standards to follow like IEC TR 62380, SN 29500, and JESD85.
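
As a quick illustration of how FIT rates are used (the standard definition is failures per 10^9 device-hours; the fleet numbers below are hypothetical, my own example):

```python
def expected_failures(fit: float, devices: int, hours_per_device: float) -> float:
    """FIT is defined as failures per 1e9 device-hours."""
    return fit * devices * hours_per_device / 1e9

# e.g., a hypothetical 10-FIT controller in 1M vehicles, 5,000 operating hours each
print(expected_failures(10, 1_000_000, 5_000))   # 50 expected failures
```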

DARPA – Serge Leef

In Serge’s group they are mostly focused on security, not safety, but the two are related, because security breaches can cause safety failures. At DARPA one big concern is how to secure chips in the supply chain from gray market devices and hacking attempts. An electronic system has an attack surface reference model, and security exploits happen in four categories:

  • Side Channel
  • Reverse Engineering
  • Supply Chain
  • Malicious Hardware

The DARPA proposal is to harden new chips with an on-chip security engine:

Both Synopsys and Arm are working on the specification of this on-chip security engine, so stay tuned for more details.

Serge was skeptical of the FuSa concepts, based on experience working in EDA, because companies couldn’t explain clearly enough what compliance to the ISO 26262 standard meant for EDA tools that are used to develop IP. In government circles they talk about Quantifiable Assurance, which is FuSa for defense systems.

For most digital systems, with only 5-10% of the state space even being simulated, can you really assure that the chips will operate safely under all conditions?

Panel Discussion

Q: Are we going beyond 5% coverage of the digital state space?

A: Alessandra – optimistic about progress of continued FuSa best practices across many industries, and safety is becoming more mainstream now.

A: Alex – we started asking vendors back in 2002 and 2003 for FuSa, but they weren’t offering much. There’s been gradual improvements from Arm and EDA vendors, and we feel that cores are safe, and most IP is becoming safer. Most EDA and IP vendors have checkboxes for safety compliance. Yes, security is unsolved at the moment.

A: Bharat – Do you know how the aileron controller operates safely in a jet? They have 3 out of 5 microcontrollers vote. Even the S-Class Mercedes comes with about 200 controllers, and they have to meet FuSa standards to get into the car ecosystem. In the early days, air-gap approaches stopped hacking. By 2030 about 90% of cars will have connectivity, which also provides a huge attack surface for criminals.

A: Ghani – FuSa is really a tier of things to consider, while automotive vendors are often only willing to spend quite frugally. The ISO 26262 standard is about 300 pages in length now.

A: Serge – Hackers have breached cars with WiFi, and also through entertainment and CAN bus attacks, but where is the economic gain?

Q: How do you separate security from safety standards?

A: Ghani – yes, we’ve seen hacking in cars, so security and safety are always interrelated.

A: Alex – the security of SW in our cars is quite a recent development. Hackers could even place a Trojan into an auto company during development, and then trigger a ransom attack.

A: Serge – with the market push for autonomous vehicles, the security aspects are quite high.

Also Read:

Accellera Unveils PSS 2.0 – Production Ready

Functional Safety – What and How

An Accellera Update. COVID Accelerates Progress


Intel Discusses Scaling Innovations at IEDM
by Scotten Jones on 12-14-2021 at 6:00 am

Standard Cell Scaling

Complex logic designs are built up from standard cells; to continue to scale logic, we need to continually shrink the size of standard cells.

Figure 1 illustrates the dimensions of a standard cell.

Figure 1. Standard Cell Dimensions.

From figure 1 we can see that shrinking standard cell sizes requires shrinking the cell height, or the cell width, or both. The height of the cell is the Metal 2 Pitch (M2P) multiplied by the number of tracks. The cell width is determined by the Contacted Poly Pitch (CPP) and whether the cell has single or double diffusion breaks.
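
As a quick worked example of those relationships (illustrative numbers of my own, not values from the paper):

```python
m2p_nm = 30     # Metal 2 pitch (assumed)
tracks = 6      # track count of the library (assumed)
cpp_nm = 50     # contacted poly pitch (assumed)
n_cpp  = 2      # a simple cell spanning two CPPs

cell_height_nm = m2p_nm * tracks   # 180 nm
cell_width_nm  = cpp_nm * n_cpp    # 100 nm, before diffusion-break overhead
print(cell_height_nm, cell_width_nm)
```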

Shrinking cell height and width impacts the underlying device structures. In figure 1 on the right side is a simple cross section of the fins that must fit in the cell and at the bottom of the figure is a simple cross section of the elements that make up CPP. The two Intel papers I want to discuss in this write-up are the 3-D CMOS paper that can enable reduced cell height and the 2D Monolayer CMOS paper that can enable reduced cell width.

3-D CMOS (CFET)

Figure 2 illustrates the FinFET device dimensions that must fit into the cell height.

Figure 2. Cell Height Scaling.

From figure 2 we can see that the cell height includes two cell boundaries, some number of fin pitches (depends on number of fins) and the n-p spacing between the nFET and pFET fins.

Figure 3 illustrates that once a transition is made to horizontal nanosheets, the n-p spacing can be reduced via various options.

Figure 3. n-p Spacing.

On the left side of figure 3 is a standard horizontal nanosheet that needs the same n-p spacing we would see with a FinFET. This type of configuration supports a 6-track cell height or, with the addition of buried power rails (BPR), a 5-track cell (BPRs reduce the cell boundary width). The middle of the figure illustrates adding a dielectric wall between the nFET and pFET to reduce n-p spacing and enable track heights of 4.33 to 4.00. Finally, on the right side of the figure, a 3-D CMOS device (CFET) is illustrated; the n-p spacing is now zero in the lateral dimension because the FETs are stacked. This approach can support track heights of 4.00 to 3.00.

In their “Opportunities in 3-D stacked CMOS transistors” paper at IEDM, Intel provided an overview of 3D-CMOS.

The basic idea behind 3D-CMOS is illustrated on the left side of figure 4.

Figure 4. 3D-CMOS.

There are two main options for 3D-CMOS.

In a sequential approach the bottom device layer is fabricated up through gates and contacts on one wafer, a top layer of devices is separately fabricated on a second wafer and then deposited onto the bottom devices through layer transfer or bonding, followed by interconnect for the resulting two-layer structure. The sequential approach is illustrated in the top right-hand side of figure 4.

The sequential approach requires extra processing because the bottom and top layers are fabricated independently, but it offers the ability to mix and match various materials, for example Germanium PMOS devices with Silicon NMOS devices, or even introducing Gallium Nitride devices. It does require a critical bonding step and combining the two device layers without degrading either layer.

The second approach is the self-aligned approach, where both the bottom and top layers are fabricated on the same wafer. This approach can in theory reduce the process complexity but does present integration challenges to achieving good device performance for both layers. The self-aligned approach is illustrated in the bottom right-hand side of figure 4.

3D-CMOS is a promising solution to continue scaling after horizontal nanosheets enter production.

2D Monolayer CMOS

If we look at the CPP cross section at the bottom of figure 1 in more detail, we get the diagram on the left side of figure 5. CPP is made up of the Gate Length (Lg), the contact width, and twice the contact-to-gate spacer thickness.

Figure 5. Contacted Poly Pitch Scaling.

As we can see from the table on the right side of figure 5, TSMC, for example, has been shrinking CPP by reducing all three dimensions.
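
In other words (illustrative values of my own, not TSMC’s actual dimensions):

```python
lg_nm      = 16   # gate length (assumed)
contact_nm = 18   # contact width (assumed)
spacer_nm  = 8    # contact-to-gate spacer (assumed)

cpp_nm = lg_nm + contact_nm + 2 * spacer_nm
print(cpp_nm)   # 50 nm; shrinking any of the three terms shrinks CPP
```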

Lg is limited by the type of device being used. The more constrained the channel, and the more gates used to control it, the shorter the minimum gate length can be. Figure 6 presents the limits for different device types and presents the minimum gate length versus channel thickness and number of gates for silicon.

Figure 6. Gate Length Scaling.

From the bottom left part of figure 6 we can see that for a planar transistor with a single gate, the channel is poorly controlled, and the gate length limit is around 30nm (theoretically it is less, but all the logic manufacturers moved off single-gate planar devices by 30nm). Moving to a planar device with a thin channel and two gates, as is seen with FDSOI, reduces the minimum gate length to approximately 23nm. FinFETs, with the channel constrained to a thin fin and three gates, enable gate lengths down to approximately 16nm; this is one of the reasons FinFETs have been adopted as the logic mainstream. As we move forward, horizontal nanowires/nanosheets with four gates offer minimum gate lengths of approximately 13nm. Finally, beyond nanosheets, the Intel work discussed here addresses 2D devices that can enable channel lengths of less than 10nm, providing further CPP scaling. This is a promising next step beyond 3D-CMOS, or may be integrated with 3D-CMOS. There were many papers presented at the conference, representing a variety of companies and research groups, illustrating the interest in this technology.
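
One commonly cited rule of thumb behind these limits is the electrostatic “natural length”: the minimum gate length is often taken as roughly five to seven times a screening length that shrinks as more gates surround the channel. The sketch below uses that rule with illustrative silicon parameters; it is my approximation, not Intel’s data, and it will not reproduce the paper’s numbers exactly:

```python
from math import sqrt

def natural_length_nm(t_ch_nm: float, t_ox_nm: float, n_gates: int,
                      eps_ch: float = 11.7, eps_ox: float = 3.9) -> float:
    """Approximate screening ('natural') length for an N-gate silicon device."""
    return sqrt(eps_ch * t_ch_nm * t_ox_nm / (n_gates * eps_ox))

for gates in (1, 2, 3, 4):
    lam = natural_length_nm(t_ch_nm=5.0, t_ox_nm=1.0, n_gates=gates)
    print(gates, "gates -> Lg_min ~", round(6 * lam, 1), "nm")
```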

The Intel paper discussing this is entitled: “Advancing 2D Monolayer CMOS Through Contact, Channel and Interface Engineering”.

As silicon is scaled down, the channel thickness must get thinner, and mobility degradation eventually occurs; the silicon limit for good mobility is approximately 5nm. Transition Metal Dichalcogenide (TMD) materials show similar mobility in monolayer films (~1nm) to their bulk mobility, making them attractive candidates for 2D devices. TMD films will have lower mobility and higher contact resistance than current-generation silicon CMOS devices, but simulations indicate that even with these drawbacks, stacking enough 2D layers will provide a performance and scaling improvement over silicon horizontal nanosheets. For example, if 2D layers are stacked 6 high with an Lg of 5nm and metallic contacts, significant scaling, power, and performance improvements can be achieved.

Figure 7 illustrates 2D Devices.

Figure 7. 2D Devices.

The paper reviews three key areas: channel material quality, contact resistance, and gate stack quality.

The best channel results in the literature are from deposition techniques that haven’t been demonstrated on 300mm wafers. MOCVD and nucleated CVD on pre-patterned seeds have the potential for 300mm deposition. MOCVD offers the prospect of a wide temperature range, including 300°C deposition, that could open up TMD channel deposition compatible with the Back End of Line (BEOL). Nucleated CVD offers grain-boundary-free devices, and Intel has achieved the best published WS2 mobility.

Low contact resistance contacts to NMOS and PMOS remain a challenge for 2D FETs. The authors show promising results for Sb on MoS2, and Sb offers a higher melting point than Bi (the other leading NMOS contact material). PMOS contacts remain far more challenging; once again the authors showed some results with Ru, but there are still a lot of challenges.

The 2D materials of interest here are well known to collect organic processing residues that can inhibit ALD deposition of gate oxides. The authors compared a vacuum anneal and a forming gas anneal; both reduced the carbon contamination levels, and the forming gas anneal was shown to improve measured electrical performance for a MOSCAP.

By comparing the work done here with previously published work the authors have shown where 2D devices currently stand and introduced promising new contact materials and deposition techniques.

2D devices are far from being ready for manufacturing but several groups are pursuing them, and steady progress is being made.

Conclusion

Samsung is currently trying to be the first in the industry to put horizontal nanosheets (HNS) into production. Intel and TSMC are also working on HNS. It is likely that HNS will carry the industry at least through 2025. By around 2028, 3D-CMOS (called CFETs by others) may be ready for production, incorporating vertical stacks of n and p nanosheets. As a follow-on to, or even an extension of, 3D-CMOS, 2D devices are a potential path for continued scaling past the end of the decade. Intel is clearly trying to reassert itself as a semiconductor technology leader.

Also Read:

SISPAD – Cost Simulations to Enable PPAC Aware Technology Development

TSMC Arizona Fab Cost Revisited

Intel Accelerated


DAC 2021 – Joe Sawicki explains Digitalization
by Daniel Payne on 12-13-2021 at 10:00 am

Monday at DAC this year started off on a very optimistic note as Joe Sawicki from Siemens EDA presented in the Pavilion on the topic of Digitalization, a frequent theme in the popular press because of the whole Work From Home transition that we’ve gone through during the pandemic. Several industries are benefiting from the digitalization trend: semiconductor, aerospace, defense, automotive, heavy machinery, medical, consumer products, energy, utilities and even marine.

The ever-present Tesla leads the way in EV sales as owners enjoy the benefits of SW updates over the air, adding new features and fixing bugs, much like our smart phone apps, all enabled by semiconductors. Even President Biden extols the virtues of semiconductor production in the US, and how national policy should benefit the semiconductor industry.

During the pandemic we’ve experienced trauma because of illness and death, yet the move to digital commerce has sharply risen, and cloud-based services are flourishing, like Zoom and Microsoft Teams. Most modern businesses are quickly moving their services and support to digital and cloud platforms.

Semiconductor content in electronic products has moved from 16% up to 25% as a mega-trend, think: Amazon, Google, Tesla, Bosch, ZTE, Huawei, Apple and Facebook.

Foundry revenue trends are showing that systems companies are growing at a 26.8% CAGR, which is a big shift in who does SoC designs. Some of the drivers at foundries include sensors, edge computing, 5G, wireless, cloud and data center. Just in the past 5 years there has been a 5X increase in the number of sensors connected to the Internet, projected to reach 29.6 billion devices by 2025. An example of connected sensors is the Ring Doorbell.

There’s growth in how 5G and IoT markets are linked, and even data centers are forecasted to have a 14% CAGR during the 2028-2030 period. The flow of information is from sensors, to edge processing, to 5G or wireless, and it is all ending up in a data center.

Within the next 10 years the projection is that 95% of our vehicles will be connected, which creates big demands on the wireless infrastructure. ADAS is expected to have a 22% CAGR from 2020 to 2030. The value of Electronic Control Units (ECUs) in a car went from $302 in 2010, to $499 in 2020, and is expected to reach $758 by 2030. The electrification trend in automotive is in growth mode, so one challenge is to hire enough system designers to keep pace with competitive demands.

AI-related semiconductor revenue shows steep growth of some 31% CAGR through 2030, as AI becomes a pervasive technology. We now have chips optimized for AI training and inferencing, and semiconductor companies are starting new domain-specific AI chips in diverse areas like voice, video, cyber security, IoT and odor detection.

Overall semiconductor revenue is predicted to grow at a 9.5% CAGR over 2020-2025, while the GDP CAGR is a bit lower at 6.2%, so let the good times continue to roll. The US government is considering legislation to fund research in the semiconductor industry to help gain critical market share. VC funding for fabless companies reached some $8B in 2021, another record year, and foundry investments are expected to reach $79.6B in 2021.
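For readers who want to sanity-check what these CAGR figures imply, here's a minimal sketch in Python; the compounding formula is standard, and the 1.0 starting value is just a normalized placeholder rather than a revenue figure from the talk:

    # What a constant annual growth rate compounds to over a span of years.
    def compound(start, cagr, years):
        return start * (1 + cagr) ** years

    # A 9.5% CAGR over the 5 years from 2020 to 2025 multiplies revenue ~1.57x,
    print(round(compound(1.0, 0.095, 5), 3))  # 1.574
    # while a 6.2% GDP CAGR over the same span gives only ~1.35x.
    print(round(compound(1.0, 0.062, 5), 3))  # 1.351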

Historically, R&D funding as a percentage of revenue for semiconductor companies has averaged about 14.2%, ranging from 12% to 18% over the years. During the pandemic our industry has seen supply chain shortages: demand is strong while basic materials can be scarce. The shortage is slowing down consumer and industrial productivity, but how much double ordering is really going on right now?

Recently the majority of VC investment in semiconductors, some 53%, has focused on China markets.

In our EDA world, revenues are driven by new design starts, not semiconductor revenues, so just how many new designs will start at the 5nm node? We've seen failed predictions in the past, like those who said that only 3 companies would even attempt a 5nm design. In reality, IC design starts are quite strong in many areas: wearables, IoT, ADAS, industrial, smart grid and 5G. In Q2 2021 EDA revenues were at $3B, up 14.6%, a healthy increase, and EDA has now seen four consecutive quarters of double-digit growth. In fact, EDA revenues have recently seen their highest growth rate in the past 10 years.

Diving a bit deeper, the following areas are driving EDA growth:

  • AMS, RF – 12.4%
  • DFT – 9.7%
  • IC Full Custom – 8.4%
  • Formal – 8.4%
  • Logic – 7.4%
  • Layout Verification – 7.4%
  • P&R – 7%
  • PCB – 10%

Challenges going forward into the next decade fall into three areas: technology scaling, design scaling and system scaling. Consider Apple, a systems company that has designed its own A-series processors for the past 8 years: the A7 had about 1 billion transistors, while the latest A15 includes some 15 billion, almost perfectly in line with the 16X increase Moore's Law predicts over that span.
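The arithmetic behind that 16X figure is easy to verify: one doubling every two years over eight years is four doublings. A quick sketch in Python:

    # Moore's Law sanity check: 8 years at one doubling every 2 years.
    a7_transistors = 1e9                 # A7: ~1 billion transistors
    doublings = 8 / 2                    # four doubling periods
    predicted = a7_transistors * 2 ** doublings
    print(predicted / 1e9)               # 16.0 billion, vs ~15 billion in the A15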

Machine learning as a technique is being applied to EDA tools to accelerate yield ramp, pattern analytics and metrology. On design scaling, 3nm SoC design costs are predicted to range from $535M to $626M. In EDA the use of High-Level Synthesis (HLS) continues, and Siemens has offered HLS for about 25 years now. With HLS, systems companies like NVIDIA are designing new chips with a small group of just 10 engineers in only 6 months. As AI techniques become more ubiquitous, we can expect even more application-specific accelerator products to be announced, with engineers exploring during the design phase to find the optimum architecture.

Another new acronym is STCO, short for System Technology Co-Optimization, where multiple die and chiplets are assembled into new systems using 3D die stacking and other advanced packaging concepts.

For system scaling, the path forward calls for mixed-mode virtual simulations along with hardware-assisted verification techniques like emulation, where a systems designer can run real apps even before the silicon has been manufactured. Emulation will also be used more for debugging HW/SW integration issues early in the design process.

Within Siemens there's something called PAVE360, which enables a systems approach to model, sense, compute, analyze and actuate as a model, prior to implementation. In the pursuit of autonomous vehicles, a PAVE360 methodology is more practical than driving billions of actual miles to uncover safety issues.

Summary

Mr. Sawicki was quite upbeat about the semiconductor industry trends, and the number of IC design starts is great news for EDA vendors of all types, although there are formidable challenges ahead to meet the scaling demands.

Also Read:

System Technology Co-Optimization (STCO)

Siemens EDA will be returning to DAC this year as a Platinum Sponsor.

Machine Learning Applied to IP Validation, Running on AWS Graviton2


A Practical Approach to Better Thermal Analysis for Chip and Package

A Practical Approach to Better Thermal Analysis for Chip and Package
by Daniel Nenni on 12-13-2021 at 6:00 am

ANSYS Thermal Chip Model

Thermal modeling has become a hot topic for designers of today’s high-speed circuits and complex packages. This has led to the adoption of better and more sophisticated thermal modeling tools and flows as exemplified in this presentation by Micron at the IDEAS Digital Forum. The presentation is titled “Thermal Aware Memory Controller Design with Chip Package System Simulation” and covers the latest developments in both power modeling and thermal modeling by the Controller design team at Micron.

The first presenter is Shiva Shankar Padakanti, a senior physical design manager at Micron with over 17 years of experience in backend design and more than 33 tape-outs down to 7nm. Shiva introduces the two major thermal issues faced by his team: (a) avoiding overly pessimistic thermal limits that degrade a chip's performance, and (b) avoiding thermal runaway, a reliability issue where local hotspots increase device leakage, which raises the temperature and leakage yet further.

Shiva sets the stage by describing their traditional thermal analysis flow, which assumed a uniform temperature across the entire chip based on total power and relied on simple power/temperature limits with a large safety margin. Because the analysis under-reported the true maximum temperature, power signoff was constrained to unrealistically pessimistic temperature limits, which could compromise the design's specification and cost significant chip performance through over-design. The first attempt to improve their analysis capability was to analyze power block-by-block instead of full-chip. This gave a more realistic non-uniform temperature distribution but was still unable to account for temperature-dependent leakage power.

Working with Ansys, Micron developed a new analysis flow that uses the Chip Thermal Model (CTM) technology augmented with the APL Leakage Model. A CTM cuts each layer in the chip into a fine grid and describes the power output of each grid square as a function of its temperature, while the APL Leakage files capture how device leakage varies with temperature. These models are generated by the Ansys RedHawk™ or Ansys Totem™ power integrity signoff tools and give a much more accurate, fine-grained power model, which is then handed off to the Thermal team to enable package and system thermal analysis.

Fig.1 Thermal analysis flow using Chip Thermal Models (CTM) generated by Ansys RedHawk or Ansys Totem power integrity signoff tools, and then used for package and system thermal analysis by Ansys Icepak.
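The CTM and APL file formats themselves are Ansys-proprietary, but the idea they capture can be sketched conceptually: each grid cell has a roughly constant dynamic power plus a leakage term that grows exponentially with its local temperature. A minimal illustration in Python, with entirely hypothetical coefficients:

    import numpy as np

    # Conceptual sketch of a gridded, temperature-dependent power model.
    # The real CTM/APL formats are Ansys-proprietary; every coefficient
    # below is hypothetical, chosen only to illustrate the behavior.
    def cell_power(p_dyn, p_leak_ref, temp_c, t_ref_c=25.0, k=0.04):
        # Constant dynamic power plus leakage that grows with temperature.
        return p_dyn + p_leak_ref * np.exp(k * (temp_c - t_ref_c))

    p_dyn = np.full((4, 4), 5e-3)                   # 5 mW dynamic power per cell
    p_leak_ref = np.full((4, 4), 1e-3)              # 1 mW leakage at 25 C
    temps = np.random.uniform(60.0, 110.0, (4, 4))  # local temperatures, in C

    power_map = cell_power(p_dyn, p_leak_ref, temps)  # W per grid cell
    print(power_map.sum())                            # total power for this layer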

The advantage of the CTM technology is that it accurately predicts the location of thermal hotspots and, in this test case, predicted a temperature 12% higher than the simpler block-based approach (see Fig.2). The higher temperature results from accurately modeling the temperature-dependent leakage that the block-based and traditional flows did not consider.

Fig.2 A comparison of the temperature profile from the simpler block-based thermal modeling approach against the more accurate Chip Thermal Model, which relies on a per-layer gridded model. The CTM technology accurately identifies the hotspot locations and predicts a 12% higher temperature due to temperature-dependent leakage.

The second part of the presentation is narrated by Ravi Kumar, a senior principal engineer at Micron with over 9 years' experience in thermal management of electronics. Ravi starts by pointing out that chip, package, and system analyses each operate at a different scale, from microns to centimeters, and thus require a range of simulation technologies. Also, simulating the complete stack shown in Fig.3 is computationally expensive for each temperature point, often limiting the scope of thermal analysis.

Fig.3 Cross section of the complete chip-package-system stack for the Micron controller under thermal analysis, including the PCB substrate and the external heat sink. The cooling airflow over the heatsink is modeled by Icepak using Ansys’ computational fluid dynamics technology.

However, by using the CTM modeling approach, Ravi's team was able to cut thermal simulation time by 90% thanks to the higher efficiency and faster convergence of the CTM approach. The final operating temperature depends, of course, on the chip's power output, but the power output is itself temperature dependent. Icepak therefore executes internal iterations using the CTM to arrive at a stable operating temperature. In this test case, the heat sink was designed to dissipate an estimated 50W, but the system actually ended up generating closer to 60W. Failing to anticipate the real heat flow can thermally stress the package and impact the performance and reliability of the entire system.
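Ansys doesn't publish Icepak's internal algorithm, but the feedback loop it has to resolve can be illustrated as a simple fixed-point iteration under a crude linear thermal model; every coefficient below, including the junction-to-ambient resistance, is hypothetical:

    import math

    # Sketch of the power/temperature feedback loop (illustrative only).
    P_DYN = 40.0       # W, temperature-independent dynamic power (assumed)
    P_LEAK_REF = 10.0  # W, leakage at the 25 C reference point (assumed)
    K = 0.02           # 1/C, hypothetical exponential leakage coefficient
    THETA = 0.5        # C/W, hypothetical junction-to-ambient resistance
    T_AMBIENT = 25.0   # C

    temp = T_AMBIENT
    for _ in range(100):
        power = P_DYN + P_LEAK_REF * math.exp(K * (temp - 25.0))
        new_temp = T_AMBIENT + THETA * power   # crude linear thermal model
        if abs(new_temp - temp) < 1e-6:
            break
        temp = new_temp

    print(f"stable point: {power:.1f} W at {temp:.1f} C")  # ~57.8 W at ~53.9 C

Note how the converged power lands well above the temperature-independent estimate, echoing the 50W-versus-60W gap Micron observed.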

A final benefit highlighted by the Micron team was the ability to optimize the placement of on-chip thermal sensors. The traditional techniques had not placed the sensors at the true maximum hotspots and under-measured the hotspot temperature by 8.1°C. The new CTM-based approach optimized sensor placement and reduced the risk of thermal runaway.
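The presentation doesn't detail the placement algorithm, but the basic idea of hotspot-driven sensor placement can be illustrated in a few lines: take the predicted temperature map and put sensors at its hottest cells (the map below is random data, purely for illustration):

    import numpy as np

    # Toy hotspot-driven sensor placement: pick the k hottest cells of a
    # predicted temperature map as sensor sites. The map is random here;
    # a real flow would use the CTM-based temperature prediction.
    temp_map = np.random.uniform(60.0, 110.0, (8, 8))  # per-cell temps, C

    k = 3
    hottest = np.argsort(temp_map, axis=None)[-k:]     # indices of k hottest
    rows, cols = np.unravel_index(hottest, temp_map.shape)
    for r, c in zip(rows, cols):
        print(f"sensor at cell ({r}, {c}): {temp_map[r, c]:.1f} C")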

Shiva concluded the presentation by outlining his team's future projects: thermal-aware electromigration analysis and the mechanical warpage of package and PCB due to thermal gradients.

You can view the entire Micron presentation on-demand at the Ansys IDEAS Digital Forum under the Electrothermal Analysis track. Registration is free.

Also Read

Ansys CEO Ajei Gopal’s Keynote on 3D-IC at Samsung SAFE Forum

Ansys to Present Multiphysics Cloud Enablement with Microsoft Azure at DAC

Big Data Helps Boost PDN Sign Off Coverage


Edge Computing Paradigm

Edge Computing Paradigm
by Ahmed Banafa on 12-12-2021 at 6:00 am

Edge Computing Paradigm

Edge computing is a model in which data, processing and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud.

Edge Computing is a paradigm that extends Cloud Computing and its services to the edge of the network. Like the Cloud, the Edge provides data, compute, storage, and application services to end-users.

Edge Computing reduces service latency and improves QoS (Quality of Service), resulting in a superior user experience. Edge Computing supports the emerging class of Metaverse applications that demand real-time, predictable latency (industrial automation, transportation, networks of sensors and actuators). The Edge Computing paradigm is also well positioned for real-time Big Data and real-time analytics: it supports densely distributed data collection points, adding a fourth axis to the often-mentioned Big Data dimensions (volume, variety, and velocity).

Unlike traditional data centers, Edge devices are geographically distributed over heterogeneous platforms, spanning multiple management domains. That means data can be processed locally in smart devices rather than being sent to the cloud for processing.

Edge Computing Services cover:

  • Applications that require very low and predictable latency
  • Geographically distributed applications
  • Fast mobile applications
  • Large-scale distributed control systems

Advantages of Edge computing

  • Bringing data close to the user. Instead of housing information at data center sites far from the end-point, the Edge aims to place the data close to the end-user.
  • Creating dense geographical distribution. First of all, big data and analytics can be done faster with better results. Second, administrators are able to support location-based mobility demands and not have to traverse the entire network. Third, these (Edge) systems would be created in such a way that real-time data analytics become a reality on a truly massive scale.
  • True support for mobility and the Metaverse. By controlling data at various points, Edge computing integrates core cloud services with those of a truly distributed data center platform. As more services are created to benefit the end-user, Edge networks will become more prevalent.
  • Numerous verticals are ready to adopt. Many organizations are already adopting the concept of the Edge, and many different types of services aim to deliver rich content to the end-user, spanning IT shops, vendors, and entertainment companies.
  • Seamless integration with the cloud and other services. With Edge services, we're able to enhance the cloud experience by isolating user data that needs to live on the Edge. From there, administrators are able to tie analytics, security, or other services directly into their cloud model.

Benefits of Edge Computing

  • Minimize latency
  • Conserve network bandwidth
  • Address security concerns at all levels of the network
  • Operate reliably with quick decisions
  • Collect and secure a wide range of data
  • Move data to the best place for processing
  • Lower expenses by using high computing power only when needed, and less bandwidth
  • Better analysis of and insights from local data

Real-Life Example:

A traffic light system in a major city is equipped with smart sensors. It is the day after the local team won a championship game and it’s the morning of the day of the big parade. A surge of traffic into the city is expected as revelers come to celebrate their team’s win. As the traffic builds, data are collected from individual traffic lights. The application developed by the city to adjust light patterns and timing is running on each edge device. The app automatically makes adjustments to light patterns in real time, at the edge, working around traffic impediments as they arise and diminish. Traffic delays are kept to a minimum, and fans spend less time in their cars and have more time to enjoy their big day.

After the parade is over, all the data collected from the traffic light system can be sent up to the cloud and analyzed, supporting predictive analysis and allowing the city to adjust and improve its traffic application's response to future traffic anomalies. There is little value in sending a steady live stream of everyday traffic sensor data to the cloud for storage and analysis, since the civic engineers already have a good handle on normal traffic patterns. The relevant data is the sensor information that diverges from the norm, such as the data from parade day.
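A minimal sketch of that "act locally, upload only what diverges" pattern, with entirely hypothetical names and thresholds:

    # Edge-side filtering sketch: act on every reading locally, but queue
    # only readings that diverge from the norm for upload to the cloud.
    # All names and thresholds here are hypothetical.
    EXPECTED_VEHICLES_PER_MIN = 20
    ANOMALY_RATIO = 2.0              # flag readings 2x above or below normal

    def is_anomalous(vehicles_per_min):
        ratio = vehicles_per_min / EXPECTED_VEHICLES_PER_MIN
        return ratio > ANOMALY_RATIO or ratio < 1 / ANOMALY_RATIO

    def adjust_light_timing(vehicles_per_min):
        pass  # placeholder for the city's real control logic

    def handle_reading(vehicles_per_min, upload_queue):
        adjust_light_timing(vehicles_per_min)   # always act at the edge
        if is_anomalous(vehicles_per_min):
            upload_queue.append(vehicles_per_min)

    queue = []
    for reading in [18, 22, 95, 19, 4]:  # parade surge at 95, lull at 4
        handle_reading(reading, queue)
    print(queue)                         # [95, 4]: only the anomalies go up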

Future of Edge Computing

As more services, data and applications are pushed to the end-user, technologists will need to optimize the delivery process. This means bringing information closer to the end-user, reducing latency, and being prepared for the Metaverse and its Web 3.0 applications. More users are relying on mobile devices to conduct their business and personal lives, and rich content and lots of data points are pushing cloud computing platforms, literally, to the Edge – where the user's requirements continue to grow.

With the increase in data and cloud services utilization, Edge Computing will play a key role in helping reduce latency and improving the user experience. We are now truly distributing the data plane and pushing advanced services to the Edge. By doing so, administrators are able to bring rich content to the user faster, more efficiently, and – very importantly – more economically. This, ultimately, will mean better data access, improved corporate analytics capabilities, and an overall improvement in the end-user computing experience.

Moving the intelligent processing of data to the edge only raises the stakes for maintaining the availability of these smart gateways and their communication path to the cloud. When the Internet of Things (IoT) provides the methods people use to manage their daily lives, from locking their homes to checking their schedules to cooking their meals, gateway downtime in the Edge Computing world becomes a critical issue. Resilience and failover solutions that safeguard those processes will become even more essential. Generally speaking, we are moving toward a localized, distributed model and away from the current strained centralized system that defines the Internet infrastructure.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: Prof. Banafa website
