
Arm Announces Neoverse Update, Immediately Following V9

by Bernard Murphy on 05-13-2021 at 6:00 am


Among marketing principles, “Stay Visible” must rank as one of the highest. If you don’t have something new to announce on a regular basis, you disappear. Most importantly, among the people you hope to influence, you cease to exist. This is as true for small ventures as for large ones, though small ventures often struggle to understand or prioritize the importance of visibility. Major tech operations (such as Arm and NVIDIA) don’t make this mistake. They’ll have one or more big annual announcements, followed by regular progress updates through the rest of the year. This Arm Neoverse update makes sure they stay highly visible as innovators and thought leaders.

2021 updates

I wrote recently about the Arm v9 announcement, perhaps not a blockbuster but full of new goodies. They have quickly followed with more of a blockbuster update on their Neoverse series, the product family that aims to address everything infrastructure-related, from cloud servers to communication backbones to the edge. Last year, Chris Bergey, Sr. VP and GM of Infrastructure, detailed the product plan: V-series for maximum performance, N-series for scale-out performance and E-series for maximum throughput. This year he announced V-series and N-series updates along with a new and improved CMN (coherent mesh network).

V1 is the first introduction in the V-series and offers a 50% performance uplift (over plan, I assume). It also supports 2x256b SVE (Scalable Vector Extension) and bfloat16. Per Google, bfloat16 is ideal in cloud applications, especially in TPUs. Take from that what you will.
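Part of bfloat16’s appeal for ML workloads is its simplicity: it is just a float32 with the mantissa cut from 23 bits to 7, keeping the full 8-bit exponent range (so conversion is cheap and dynamic range is preserved). A minimal sketch of the format, purely illustrative and not tied to any Arm implementation:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a float32 value to bfloat16 precision, returned as the
    float value the bfloat16 bit pattern represents."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # bfloat16 keeps the sign, the full 8-bit exponent, and the top
    # 7 mantissa bits of float32; round to nearest even at the cut.
    rounded = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", rounded))[0]

print(to_bfloat16(1.0))      # exactly representable
print(to_bfloat16(3.14159))  # only ~2-3 decimal digits survive
```

The coarse mantissa is usually tolerable for neural-net training and inference, which is why TPUs and now Neoverse cores support it natively.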

N2 is the second release in the N-series, with a 40% performance uplift; it also supports SVE, here 2x128b, and again bfloat16.

CMN only rated one slide in the briefing, but I know it will be integral to server architectures, indeed to any regular arrayed structure, though multi-core server chips are the most obvious application. The new CMN-700 supports more cores, caches, crosspoint nodes, memory ports and CCIX ports per die (it also supports CXL).

Hyperscaler and supercomputing growth

Some big announcements here, starting with another leading hyperscaler adoption, from Tencent (they’ve already announced support for cloud gaming). And Neoverse is coming soon to Oracle Cloud, delivered there by Ampere Altra processors. Good to see that Ampere is finding serious traction; the hyperscaler oligopoly needs external competition.

AWS Graviton2 is already available as an EC2 instance. AnandTech ran an analysis last year comparing it with the latest (and then not yet released) Intel and AMD processors. The comparison is clouded by the mysteries of how AWS rates its EC2 instances, so it is difficult to draw black-and-white conclusions. But it’s telling that the Arm-based server is now being compared directly with top-of-the-line servers. And AWS shows that growth in EC2 instances is now dominated by Graviton instances.

Alibaba have tested their own Arm-based cloud instances, announcing a significant performance boost in a SPECjbb benchmark and a 50% jump for their DragonWell Open Java development kit on N2. I think I see a theme here: if you’re big in cloud, you’re building (or buying) Arm-based instances.

Also, the Ministry of Electronics and Information Technology (MeitY) in India is driving an exascale project under their Centre for Development of Advanced Computing (C-DAC). This will leverage the French SiPearl Rhea processor for servers (72 V1 cores, HBM2 and DDR5 memory, in TSMC 6nm), and the South Korean ETRI K-AB21 (based on Arm Zeus, an earlier name for one of the Neoverse cores) for high-performance, low-power inference.

5G growth

Marvell has launched their OCTEON family addressing 5G RAN, with applications in remote radio units, distributed units and central units, and also in SmartNIC cards, all building on N2 cores.

At the edge, Arm has been collaborating with Vodafone on uCPE (universal customer premises equipment) which is, and I quote, “a general-purpose platform that integrates compute, storage and networking on a commodity, off-the-shelf server. This allows it to provide network services (such as SD-WAN, firewall, etc.) as virtual functions to any site on a network. uCPE is the equivalent of a ‘Cloud for network services’, but at the customer site.” This reduces total cost of ownership for the customer and reduces their carbon footprint.

Lots of good progress. You can read the release HERE.

 


Formal Verification Approach Continues to Grow

by Daniel Payne on 05-12-2021 at 10:00 am


After a few decades of watching formal verification techniques being applied to SoC designs, it certainly continues to be a growth market for EDA vendors. In the first decades, from 1970 to 1990, the earliest formal tools emerged at technical conferences, typically written by university students earning their Ph.D.s, and users had to be at the Ph.D. level to understand how best to use the limited tools and interpret the results for theorem proving. In the next twenty years, 1990-2010, we saw formal property checking emerge, but you still had to manually write the properties and have domain experience in formal technology. Thankfully, from 2010 to the present, the formal tool user is a verification engineer using a variety of automated formal apps. Now that’s progress.

Siemens/Infineon had their own internally developed formal apps and decided in 2005 to spin out a separate company called OneSpin, which has been a strong #3 player in the formal market for a number of years now. Just last month Siemens EDA announced the acquisition of OneSpin, adding to what the Questa product family has been offering, so the combination now places Siemens EDA as the #2 vendor in formal apps. It’s rather rare to find a successful EDA spin-out that was later acquired.

I’ve been following OneSpin ever since I passed by their booth at DAC one year and McKenzie Ross literally pulled me into their booth for an update, maybe it was the DAC badge that said Press or Blogger. Brett Cline is another OneSpin person that I’ve followed ever since his days at Summit Design in 1998. You can follow OneSpin on Twitter, where they are quite active and relevant.

With formal you hear about Assertion-Based Verification (ABV) and apps for specific tasks like Sequential Logic Equivalence Checking (SLEC), Clock Domain Crossing (CDC) checking, etc. Mentor acquired formal vendor 0-In back in 2004 and continued to grow the product family over the years. Now Siemens EDA has over a dozen formal apps to choose from; more choice is always better.

Verification engineers have been quick to adopt new tools and methodologies to get their jobs done on increasingly large designs with huge state spaces, where traditional functional simulation techniques are not sufficient to reach verification goals. Adoption of formal methods and apps has enabled verification engineers to complete their tasks more quickly.

Acquisition Benefits

Two combined engineering teams instead of competing ones makes sense for growing the market for formal tools. With bigger scale come benefits: better ideas, and knowledge of what has already been tried before.

The technology combination of OneSpin and Questa Formal will offer users even more ways to automate their verification, and I’ll be interested to learn what the product roadmap looks like.

Customers of both Questa Formal and OneSpin should be happy, knowing that their vendor is investing even more resources in this critical area.

Growing a point-tool company into a successful business and then getting acquired by a larger EDA company is a good sign, because it means the formal product segment is healthy. So expect to see continued news of formal tools being adopted across the hot industries: 5G, AI, automotive, IoT, HPC and the mission-critical fields of aerospace and defense.

Summary

There’s always the classic decision of growing your own product line through internal software development or acquiring a smaller, successful company. I’d say that Siemens EDA made another savvy decision to acquire OneSpin in order to accelerate in the formal market. My favorite list of past acquisitions shows that Siemens EDA has a history of making these deals work out: Berkeley DA, Tanner EDA, 0-In, Model Technology.



3rd Party Semiconductor Intellectual Property Market Update

by Richard Wawrzyniak on 05-12-2021 at 6:00 am


The 3rd Party Semiconductor Intellectual Property (IP) market has seen great innovation in the products it offers to System-on-a-Chip (SoC) designers over the last ten years. If any market segment in the semiconductor industry typifies the intense evolutionary pressures that the entire electronics market has undergone, it is the 3rd Party IP market.

Most of these evolutionary forces are driven by the need to integrate more functionality in fewer devices at the system level. The primary method to accomplish this is using 3rd Party IP. The IP market has evolved to supply the solutions SoC designers require to craft their silicon products in response to ever-changing market requirements.

Rather than looking at the 3rd Party IP market as a monolithic segment and tracking revenues by year, Semico Research Corp. has analyzed the IP market by functional category and then sub-divided these categories into revenues by quarter. This analysis is supplemented by additional data on design starts, IP costs and SoC unit shipments. Semico has arranged the IP market into the following IP types:

Memory, CPU Core, DSP Core, Graphics, Analog, Interface, Logic / Embedded Analytics, Chip Enhancement, Interconnect, Security, Audio and eFPGA

IP Market on an IP Category Basis

  • The CPU market is the largest IP category, and it will remain so for the foreseeable future, accounting for 34.4% of total market revenues by 2025.
    • A new CPU architecture has been introduced by ARM that is focused on processing data around safety standards and implementing those protocols in a system. This continues the theme of developing CPU architectures that are not general purpose but are specific in nature such as processors for vision, signal processing, graphics processing and video processing.
    • With the rise of Artificial Intelligence (AI) applications through the deployment of Convolutional Neural Networks (CNNs), the market is seeing the creation of CPU IP architectures created specifically with AI functions in mind.
    • The introduction of ‘free’ Instruction Set Architectures (ISAs) such as RISC-V and open-source versions of the MIPS architecture is expected to continue to invigorate the discovery and exploration phase the market is now enjoying, driven by AI applications and other silicon solutions.
  • Memory IP was the 2nd largest category in 2020 driven by the need for increased resources on SoCs to handle the large number of CPU cores in silicon solutions. By 2025, the Memory IP market will account for 11.4% of total market revenues.
    • New embeddable memory architectures as licensable IP entered the market two years ago. MRAM is being supported by TSMC, GLOBALFOUNDRIES, SMIC and UMC. Other non-volatile memory types such as STTRAM, ReRAM, CBRAM and FeRAM are entering the market. In addition, a new memory type based around carbon nanotubes (CNTs) is also starting to gain some traction.
    • Presumably, Intel’s reentry into the silicon foundry market will include support for MRAM and many other new memory architectures.
  • Interface was the 3rd largest IP category in 2020, with an 11.0% share of the market driven by the need for faster interfaces to move the enormous amount of data created through AI. However, by 2025, the Interface category’s share of the market will fall to 9.6%, having been overtaken by GPU IP.
  • Graphics was the 4th largest IP category in 2020 driven by the increasing need for SoCs to incorporate embedded vision functions for AI and to process data locally. The Graphics market will account for 10.5% of total IP revenues by 2025, overtaking the Interface category.
  • A licensable programmable logic fabric has entered the market from several different companies and could generate significant revenues in the SoC market as adoption increases and the technology matures.
    • While starting from a small base, eFPGA will have the highest growth rate with a CAGR of 68.4% through 2025.
  • The market for Security is mid-sized today but will grow in importance and necessity as portable wireless devices permeate our society and require increased protection from viruses and denial-of-service attacks. This will be especially true to secure data in the IoT market. This category has one of the highest growth rates after eFPGA.
    • The industry discussion as to the need for Security IP has ended in the affirmative. Now the discussion has shifted to how much Security IP in the silicon is enough.
  • From 2020 to 2025, Licensing revenues will increase at a faster rate than will Royalty revenues due to the new wave of architectural refreshes and exploration initiated by designers looking to incorporate AI functionality in their silicon solutions. This signals continued market growth over the forecast period of 2021 – 2025.
    • This is a change from the recent past when royalty revenue growth outstripped that of licensing revenues.
    • Once SoCs using these new architectures start shipping in volume, Semico anticipates that royalties will once again be somewhat larger than licensing revenues.

By all measures, the IP market today is not a mature market, nor is it nearing maturity. This is evident simply by looking at its dynamic nature and the level of innovation IP users are asking for and the IP vendors are delivering. In a mature market this would not be possible since lower growth levels would not allow for adequate resources to achieve the innovation this market is delivering. Semico believes as the end markets that IP serves evolve, so too will the IP market evolve.

Taken from the report, Figure 175 shows the total IP market by product category by quarter. Semico forecasts that the IP market will exceed $2.0B in revenue per quarter by 2Q22 and grow to $2.7B per quarter by 4Q25, an 8.9% CAGR over the forecast period.

Figure 175: Total IP Market Revenues Actual and Forecast by Quarter, 1Q06 – 4Q25

*Forecast  Source: Semico Research Corp.
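As a quick sanity check on those figures, growing from roughly $2.0B per quarter in 2Q22 to $2.7B in 4Q25 (3.5 years) works out to just under 9% per year, consistent with the quoted 8.9% CAGR (the report's exact base period may differ slightly):

```python
# CAGR sanity check on the quarterly revenue forecast above.
start, end = 2.0, 2.7          # $B per quarter: 2Q22 and 4Q25
years = 3.5                    # 2Q22 to 4Q25
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR = {cagr:.2%}")    # just under 9% per year
```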

The lid has been removed from the innovation box and we all will benefit with better, more integrated and higher-performance products!

Semico has recently written a report covering this topic:  Licensing, Royalty and Service Revenues for 3rd Party IP: 2021 Market Analysis and Forecast (SC105-21) April 2021

Here is a link to the Table of Contents on our website:

https://semico.com/content/licensing-royalty-and-service-revenues-3rd-party-ip-2021-market-analysis-and-forecast


Magwel Adds Core Device Checking for ESD Verification

by Tom Simon on 05-11-2021 at 10:00 am


In the past, ESD sign-off has been accomplished by a combination of techniques. Often ESD experts are asked to look at a design and assess its ESD robustness based on experience gained from prior chips. Alternatively, designers are told to work with a set of rules given to them, again based on previous experience about what usually works and what fails. Tools can come into the mix, many using widely varying methods with widely varying success. Indeed, engineering teams looking to buy ESD tools are confronted with a confusing set of solutions that may or may not find problems and, just as importantly, may report numerous false errors. Some tools require multiple iterations to find real issues, or simply take teams forever to run and review results because of an inability to filter out false violations.

Magwel has been delivering a solid ESD solution for HBM verification for many years. Magwel’s ESDi tool strikes the perfect balance of comprehensive checking without creating burdensome simulation workloads. As a result, it reports fewer false positives and gives designers the tools to rapidly trace, debug and fix any issues it finds.

Tool Choices

As mentioned before, engineers often face apples-to-oranges choices, hoping they pick the right tool. Some tools use simple loop resistance to find potential problem paths, then require more detailed simulation to assess the real level of severity. This approach can miss problem paths altogether. Other solutions rely on rules to detect issues; however, the quality of the verification depends heavily on the specifics of the rules, and new problems can occur that are not anticipated by the existing rules. Another approach is to rely too heavily on voltage propagation. While this is a step in the right direction, it can miss the nuances that many designs present.

Simulation Approach

Magwel’s approach has always been to use thorough and fast simulation, with easily obtainable TLP models for the ESD devices. It has a built-in highly accurate extraction engine tuned for ESD analysis. ESDi looks at each and every pad-pair (or pins in the case of IP) to see where problems are occurring. Self-protecting devices are also easily modeled. Because it uses comprehensive simulation, it can handle multiple parallel discharge paths, which ultimately affect current distribution and voltage levels across devices. Performance is boosted by parallel processing. ESDi typically simulates an HBM test in a fraction of a second and can perform up to 10K tests per hour per parallel thread.

ESDi also checks for missing vias or wires which may lead to unconnected ESD devices, as well as many other common layout issues that can cause ESD related failures. It handles chips with multiple power domains and checks for electro-migration issues.

ESDi-XL Flow

To improve overall ESD design and verification, Magwel has just announced a set of new features that expand the ability to detect issues and improve the effectiveness of both front-end and back-end design teams. With ESDi-XL, design teams can now get an early look at ESD robustness during schematic design. Early design cycle insights into ESD protection effectiveness can save precious design time and avoid unnecessary iterations.

New Analysis Methods

Perhaps most important of all the new features in ESDi-XL is the addition of IO cell and core checking for overvoltage and overcurrent conditions during ESD discharge events. ESDi-XL already has an excellent ability to predict voltage and current flows in IO and ESD cells. Magwel applies this information and uses a proprietary algorithm to rapidly detect when any core devices would be exposed to overcurrent or overvoltage in the course of an ESD discharge event. This is extremely important because, even with ESD protections working, the potential for internal device damage still exists in many designs. Until now, the only way to find these issues was with massive, time-consuming simulations or after tapeout on the tester. Magwel’s approach is fast and accurate and can save projects from having to make respins.

ESDi-XL Core checking

ESDi-XL also performs new expert topological checks, such as the presence and value of protection resistors at gate inputs, presence of secondary protections or W/L aspect ratios of stacked devices.

Conclusion

Magwel’s ESDi-XL brings high-speed, accurate ESD analysis to the entire flow. It can quickly replace or supplement methods such as cursory checks, tedious simulation and manual review, all of which can be impractical or error-prone. If you are going to buy an ESD tool, it makes sense to do it before you experience a project delay or failure. For more information on Magwel’s ESDi-XL visit their website.

 


Cadence Extends Tensilica Vision, AI Product Line

by Bernard Murphy on 05-11-2021 at 6:00 am


Vision pipelines, from image signal processing (ISP) through AI processing and fancy effects (super-resolution, bokeh and others), have become fundamental to almost every aspect of the modern world. In automotive safety, robotics, drones, mobile applications and AR/VR, much of what we now consider essential would be impossible without those vision capabilities. Cadence Tensilica Vision platforms are already used in some impressive applications from companies including Toshiba, Kneron, Vayyar and GEO Semiconductor, so when Cadence extends its vision and AI product line, that’s interesting news.

Fast evolution

Remember that this is a fast-moving space. You can already find phones with six cameras, supporting digital zoom and higher quality than you could get out of a single (phone-sized) camera. AR headsets will let you measure depths through time-of-flight sensing (supported by a laser/LED) and, by extension, distances through a little trigonometry, which can be invaluable in personal or work-related AR applications where you need to capture dimensions.

ISPs themselves are evolving rapidly because the quality of the image they produce critically affects the quality of recognition in the subsequent AI phase. This isn’t just about a pleasing picture. Now you must worry about whether the camera can distinguish a pedestrian stepping off the sidewalk in difficult lighting conditions. Before you even get to the AI phase. Dynamic range compression is a hot area here.

Then there’s AI, a world which continues to raise the bar on innovation in so many ways. This part of the pipeline is constantly advancing, spinning new neural net architectures to boost performance for safety critical applications. And to reduce power for most applications, but especially at the edge.

And finally post-processing. Bokeh for a nice background blur around that picture of your kids. Merging a real view with an augmented overlay (suitably aligned) for an AR headset or glasses. Or consider SLAM processing for that robot vacuum to navigate around your house. Or a robot orderly to navigate around a hospital, delivering medications and meals to patients. SLAM works largely through vision, building a map on the fly and correcting frequently to guide navigation. Curiously, SLAM doesn’t yet depend on AI, though there are indications AI is starting to appear in some applications.

What it takes

All of this means multiple high-resolution video streams, plus perhaps time-of-flight sensor data, converging first into an ISP, requiring very intensive signal processing. Then into pre-processing to build, say, a 3D point cloud. Then perhaps into SLAM for all that localization and mapping. These are massive linear algebra tasks, generally requiring at least single-precision floating point, sometimes double.

The AI task is becoming a little more familiar: sliding windows over massive fixed-point convolution, ReLU and other operations across many neural-net planes. This requires heavy parallelism and lots of MAC (multiply-accumulate) primitives, with as much of the computation as possible staying in local memory, because off-chip memory access is even uglier for AI power and performance than for regular algorithms.
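The sliding-window MAC pattern those DSPs accelerate is easy to sketch. Below is a generic illustration in plain Python/NumPy (nothing Tensilica-specific): every output pixel is a window of multiply-accumulates, which is exactly what vector MAC units parallelize.

```python
import numpy as np

def conv2d_mac(feature: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution (cross-correlation, as in most neural
    nets) written as explicit multiply-accumulate steps."""
    kh, kw = kernel.shape
    oh = feature.shape[0] - kh + 1
    ow = feature.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=feature.dtype)
    for i in range(oh):
        for j in range(ow):
            acc = 0  # each output pixel costs kh * kw MAC operations
            for u in range(kh):
                for v in range(kw):
                    acc += feature[i + u, j + v] * kernel[u, v]
            out[i, j] = acc
    return out

def relu(x: np.ndarray) -> np.ndarray:
    """The ReLU nonlinearity applied after convolution layers."""
    return np.maximum(x, 0)
```

A hardware engine performs the inner two loops (and many output pixels) in parallel, typically in fixed point, which is why MAC count and local-memory bandwidth dominate the power and performance budget.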

Then you must fuse those inputs to enhance accuracy in object recognition through redundancy (low false positives and negatives), to compute depths and dimensions and whatever other conclusions could be derived from these images. Doing all of this requires a platform that is very fast, supporting all that signal processing, linear algebra and convolution. And very flexible because the algorithms continue to evolve. A platform which can also support hardware differentiation, to make your product better than your competitors’ offerings. The only way that I know to fit that profile is with embedded customizable DSPs.

A spectrum of solutions

The range of applications an embedded solution like this must support demands both high-performance and low-power options. Tensilica already provides the Vision Q7 and Vision P6 platforms for high throughput and low power respectively. Now they have extended the family. The Vision Q8 offers 2X the performance of the Q7 in computer vision, AI and floating point, and addresses high-end mobile and automotive applications. The Vision P1 offers a third of the power and area of the P6 and is targeted at always-on applications (face recognition, smart surveillance, video doorbells, …). Sensors for these applications will trigger (on movement or proximity, for example) a wakeup call to the app.

Both processors use the same SIMD and VLIW architecture used in the Q7 and P6, along with the same software tools, libraries and interfaces: OpenCL, Halide, C/C++ and OpenVX for computer vision, and all the standard networks for AI.

And this is really cool. Suppose you have your own AI acceleration hardware, not a full accelerator but some part of it where you add your own special sauce. The Tensilica platforms will operate as the AI master engine but can offload those planes to your special hardware, which then returns control to the master when done. The compile flow through Tensilica XNNC-Link supports this division of labor starting from a common input.

You can learn more about these Tensilica platforms HERE.

Also Read

Agile and Verification, Validation. Innovation in Verification

Cadence Dynamic Duo Upgrade Debuts

Reducing Compile Time in Emulation. Innovation in Verification


Webinar: System Level Modeling and Analysis of Processors and SoC Designs

by Daniel Payne on 05-10-2021 at 10:00 am


Engineers love to optimize their designs, but that implies there are models and stimulus to automate the process. Process engineers have TCAD tools, circuit designers have SPICE for circuit simulation, logic designers have gate-level simulators, RTL designers use logic simulation, but what is there for the system architects of a processor or SoC design? Even back in the 1980s at Intel, I recall an architect coding a GPU architecture in the MainSail language and then pushing stimulus through it to find out what the performance and bottlenecks would be, all prior to any detailed implementation, but that required a lot of error-prone hand coding.

In 2021 there’s a system engineering company called Mirabilis Design, focused on providing system architects with a modeling and analysis environment to do actual exploration and make the trade-offs to pick the winning architecture. I spoke with the founder of Mirabilis Design, Deepak Shankar, to learn about his upcoming webinar.

Webinar

System architects have high-level questions that need answers, like: how will my SoC respond to network traffic, what is the Quality of Service (QoS), and how should the Network on Chip (NoC) be configured?

I learned that the approach from Mirabilis is to use a system-level simulator along with a library of 500 models for things like queuing, networking, ARM M1, RISC-V, etc. Most of these models are parameterized, like a scheduler, so you can get the proper configuration. If you wanted a system with an Arteris NoC, an ARM M1 core and an LPDDR5 interface to RAM, how would they all work together, and how should the NoC be set up?

If the block that you want isn’t already modeled in the library, then there’s a way for you to quickly build your own, or even modify the source code of an existing library block.

In the webinar you’ll get to see two cases where the VisualSim environment is used to evaluate the requirements, power, performance and function of a processor and an SoC design. The architectural exploration flow with this approach looks like this:

What struck me most with Mirabilis was that architects can now get early access to throughput, performance, power and even timing, all before detailed implementation is started. Power estimation and performance modeling are no longer split between two different groups of engineers, with two different sets of tools.

Results from VisualSim on the power estimation side are typically within 5-7% of what you’ll measure in silicon, and that’s quite valuable, because other approaches are lucky to be within 50% of silicon values.

Mark your calendar for May 27th, from 10AM to 11AM PDT, then sign up online for this informative webinar from Mirabilis Design.

Mirabilis

Mirabilis Design provides modeling, exploration and collaboration solutions for semiconductors, digital electronics and embedded systems. Its clientele includes a mix of semiconductor, defense, aerospace, automotive and computing product suppliers. Six of the top 12 semiconductor companies, 8 of the top 15 defense suppliers and 4 of the top 10 electronics companies use VisualSim to ensure the right design for their products.

Also Read:

WEBINAR: Balancing Performance and Power in adding AI Accelerators to System-on-Chip (SoC)

Webinar – Comparing ARM and RISC-V Cores

System-Level Modeling using your Web Browser


Samtec Keynote – Power Integrity is the New Black Magic

by Mike Gianfagna on 05-10-2021 at 6:00 am


The Signal Integrity Journal recently held a half-day Electronic Systems SI/PI Forum that included presentations from industry leaders covering key design topics for signal integrity and power integrity engineers. The event was sponsored by Cadence. The keynote was presented by Istvan Novak, principal signal and power integrity engineer at Samtec. Istvan offered some observations and revelations that will definitely make you stop and think. It was quite a memorable talk. If you missed it, don’t worry, a replay link is coming. But first, let’s look at some of the reasons why power integrity is the new black magic.

Istvan Novak

First, a bit about the speaker. Istvan Novak works on advanced signal and power integrity designs. Prior to 2018 he was a distinguished engineer at Sun Microsystems, later Oracle. He worked on new technology development, advanced power distribution, and signal integrity design and validation methodologies for Sun’s successful workgroup server families. He was engaged in the methodologies, design and characterization of power-distribution networks from silicon to DC-DC converters. He is a Life Fellow of the IEEE with twenty-nine patents to his name, is the author of two books on power integrity, teaches signal and power integrity courses, and maintains a popular SI/PI website. Istvan was named Engineer of the Year at DesignCon 2020. If power integrity is of interest to you, Istvan is someone you will want to listen to.

Istvan began by explaining the motivation for the title of his talk. Before the 1990s, electromagnetic compatibility (EMC) was a key focus. In the early 1990s, signal integrity became a new area of focus and a defined discipline. In 1994, Dr. Howard Johnson famously described signal integrity challenges as “black magic” in his textbook, which is still in circulation today. Some industry experts believe that as signal integrity has matured, power integrity has now become the new black magic. Samtec is no stranger to either signal or power integrity, by the way. Dan and I discussed signal integrity with Matt Burns of Samtec in this podcast.

Istvan examines the reasons why power integrity is so difficult as he analyzes past predictions and current challenges.  The safety and reliability concerns brought on by the proliferation of power electronic circuits in all walks of life are discussed, from tiny energy-harvesting circuits, through consumer electronics products, to high-power electronics in autonomous vehicles.

Early in his talk, Istvan discusses the widespread power blackout on the east coast of the US in 2003 as a substantial example of what can go wrong. This massive chain-reaction failure was due to a power integrity problem. Istvan goes on to discuss the impact that an increasing number of supply rails has on power distribution network (PDN) design. The increasing density of these systems increases noise, and this is a key challenge.

Looking more closely at signal integrity vs. power integrity, Istvan points out that signal integrity tends to be a one-dimensional problem: the signal path is typically well defined, and the parameters associated with maintaining the signal are also known. Contrast that with power integrity, where power is distributed over the entire chip with both normal and wide traces as well as power planes. In this case the distribution of effects is much more of a 2D problem, and the mechanisms at play come from noise, which is harder to characterize.

Istvan goes on to discuss other challenges associated with power integrity and how to characterize it accurately. He cites several design examples that do a great job of illuminating what needs to be looked at. I highly recommend you watch his keynote if power integrity is on your mind. You will come to understand why power integrity is the new black magic. You can see Istvan’s keynote here.


Mars Perseverance Rover Features First Zoom Lens in Deep Space

Mars Perseverance Rover Features First Zoom Lens in Deep Space
by Synopsys on 05-09-2021 at 10:00 am

Mars Perseverance Rover Features First Zoom Lens in Deep Space

On July 30, 2020, NASA launched the Mars 2020 Perseverance rover, which landed on Mars on February 18, 2021. Perseverance has been deployed to Mars with a new mission: to search for evidence of past life and collect samples that will eventually be brought back to Earth by future missions.

Mars 2020 Perseverance rendering courtesy of NASA/JPL-Caltech

According to NASA, the Perseverance Mars mission “takes the next step by not only seeking signs of habitable conditions on Mars in the ancient past, but also searching for signs of past microbial life itself.” The Perseverance rover is similar in size and design to the Curiosity rover – about the size of a compact car – but features new camera systems to facilitate rock and soil sample collection.

One of these is the Mastcam-Z instrument, which functions as the rover’s mast-mounted scientific “eyes.” The “Z” in Mastcam-Z stands for “zoom.” It represents a milestone in the history of space exploration: it’s the first zoom lens system to be included on a deep space instrument.

Mastcam-Z: Powerful Zoom Capabilities

The Mastcam-Z instrument is an update of the Mastcam instrument on the Mars Curiosity rover. It will perform several key functions:

  • Scanning the landscape on Mars to help scientists understand the terrain.
  • Assessing atmospheric and astronomical conditions.
  • Helping scientists identify and characterize materials for rock and soil sampling.

Mastcam-Z is capable of producing multispectral, stereoscopic images. With its powerful zoom, it will help scientists see small features on the Mars landscape from far away. To give an idea of its power, it is capable of resolving features as small as 3cm from a distance of 100m.
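As a quick back-of-the-envelope check (our own arithmetic, not a figure from the Mastcam-Z team), resolving a 3cm feature at 100m corresponds to an angular resolution of roughly 300 microradians:

```python
import math

feature_m = 0.03    # 3 cm feature size (from the quoted spec)
distance_m = 100.0  # viewing distance

# Small-angle approximation: angle (radians) ~ size / distance
angle_rad = feature_m / distance_m
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"{angle_rad * 1e6:.0f} microradians ~ {angle_arcsec:.0f} arcseconds")
# -> 300 microradians ~ 62 arcseconds
```

For comparison, that is roughly one arcminute, about the limit of unaided human vision.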

Synopsys optical engineers partnered with Malin Space Science Systems and Arizona State University to design the Mastcam-Z zoom lens system using Synopsys’ CODE V optical design software. There were many technical challenges to resolve. The lenses needed to be well corrected over an extended visible spectral range and needed to operate over at least a 3x zoom range while being able to focus from close to the rover out to infinity. The lenses also had to operate over a large temperature range, including extreme temperature gradients. This set of operating conditions required substantial design effort as well as extremely detailed analyses. Synopsys optical engineers needed to show that the lenses could be successfully fabricated and that they would function over all the operating conditions.

Dr. Jim Bell, principal investigator at Arizona State University, commented, “The Mastcam-Z science and instrument development teams were extremely pleased with the high level of technical skill and support provided by the Synopsys team designing the zoom lens system. The result is an amazing pair of cameras that are expected to give us high resolution color and even 3-D views of Mars.”

Dr. Michael Ravine, advanced projects manager at Malin Space Science Systems, said, “Synopsys supported the Mastcam-Z development from the proposal through our final testing. We were pleased with how well the zooms worked under simulated Mars conditions, and we’re looking forward to seeing them actually working on Mars.”

Dr. Blake Crowther, principal optical engineer at Synopsys Optical Solutions Group, said, “One of the biggest challenges associated with designing lenses for use in interplanetary missions is the multivariate nature of their operating environment — coupled with the fact that they must work the first time and every time without human attention. This is compounded when designing a zoom lens that must function over a significant range of object distances. The amount of detail that the designer must keep in mind over the design process is incredible. In every phase of the design, the optical engineer needs to be able to communicate complex design trades and detailed analyses results to a large and diverse review community, which is no small endeavor. It was an honor to design such a lens with the talented team assembled for the job. It was also fun.”

Illuminating Sample Collection

Another updated camera system on the Perseverance Mars rover is the CacheCam, one of several engineering cameras on board. The CacheCam is located underneath Perseverance and takes pictures of sampled materials as they are being prepared for sealing and caching. Synopsys optical engineers contributed to the design of illumination optics on the CacheCam.

The CacheCam includes a fixed illuminator (no moving parts) to provide close to uniform illumination of the materials throughout the collection process; at the beginning, the samples are far from the imaging optics and, at the end, the samples are much closer. In addition, the illuminator has to account for the presence of dust on the outer surface of the optics, as well as on the inner surfaces of the collection tube.
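To see why a fixed illuminator faces a uniformity challenge, consider the inverse-square falloff from a point-like source. The distances below are hypothetical placeholders, not CacheCam’s actual geometry:

```python
# Inverse-square illustration: irradiance from a point-like source
# falls off as 1/d^2, so a sample moving closer to a naive fixed
# illuminator would be lit far more brightly at the end of collection.
# Distances are hypothetical, not from the article.
d_far_cm, d_near_cm = 10.0, 2.0

ratio = (d_far_cm / d_near_cm) ** 2
print(f"A sample at {d_near_cm} cm receives {ratio:.0f}x the irradiance "
      f"of one at {d_far_cm} cm")  # -> 25x
```

The illuminator design has to flatten out this kind of variation (plus the effects of dust) so images stay usable across the whole collection sequence.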

Simon Magarill, one of the engineers who worked on the CacheCam design, noted, “The requirement to include the dust in the analysis and design process required a lot of calculations to account for different sizes and concentrations of scattering particles. We developed a systematic approach to design an illuminator that provides optimum performance in such challenging conditions.”

Perseverance CacheCam image. Courtesy of NASA/JPL-Caltech.

Learn More

A great place to start learning more about the Mars 2020 Perseverance rover is the mission overview on NASA’s website at https://mars.nasa.gov/mars2020/mission/overview/.

Also Read:

Verification Management the Synopsys Way

Synopsys Debuts Major New Analog Simulation Capabilities

Accelerating Cache Coherence Verification


Is IBM’s 2nm Announcement Actually a 2nm Node?

Is IBM’s 2nm Announcement Actually a 2nm Node?
by Scotten Jones on 05-09-2021 at 6:00 am


IBM has announced the development of a 2nm process.

IBM Announcement

What was announced:

  • “2nm”
  • 50 billion transistors in a “thumbnail” sized area later disclosed to be 150mm2 = 333 million transistors per square millimeter (MTx/mm2).
  • 44nm Contacted Poly Pitch (CPP) with 12nm gate length.
  • Gate All Around (GAA); there are several ways to do GAA, and based on the cross sections IBM is using horizontal nanosheets (HNS).
  • The HNS stack is built over an oxide layer.
  • 45% higher performance or 75% lower power versus the most advanced 7nm chips.
  • EUV patterning is used in the front end and allows the HNS sheet width to be varied between 15nm and 70nm. This is very useful for tuning various areas of the circuit for low power or high performance, and also for SRAM cells.
  • The sheets are 5nm thick and stacked three high.
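The headline density figure follows directly from the announced transistor count and die area; a quick sanity check (our arithmetic, using the announcement’s numbers):

```python
transistors = 50e9    # 50 billion transistors (from the announcement)
die_area_mm2 = 150.0  # "thumbnail" sized area, later disclosed as 150 mm^2

# Density in millions of transistors per square millimeter
density_mtx_per_mm2 = transistors / die_area_mm2 / 1e6
print(f"{density_mtx_per_mm2:.0f} MTx/mm^2")  # -> 333 MTx/mm^2
```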

Is this really “2nm” as claimed by IBM? The current leader in production process technology is TSMC. We have plotted TSMC node names versus transistor density and fitted a curve with a 0.99 R2 value, see figure 1.

Figure 1. TSMC Equivalent Nodes.

Using the curve fit we can convert transistor density to a TSMC Equivalent Node (TEN); for IBM’s announced 333MTx/mm2 this yields a TEN of 2.9nm. In our opinion this makes the announcement a 3nm node, not a 2nm node.
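The exact fit behind Figure 1 isn’t reproduced here, but the idea can be sketched with a simple log-log least-squares fit. The node/density pairs below are approximate public estimates we use purely for illustration, not the data behind Figure 1:

```python
import math

# Approximate published density figures for recent TSMC nodes (MTx/mm^2).
# Illustrative estimates only, not the article's dataset.
nodes = [(16, 28.9), (10, 52.5), (7, 91.2), (5, 173.0)]

# Least-squares fit of log(density) = a*log(node) + b
xs = [math.log(n) for n, _ in nodes]
ys = [math.log(d) for _, d in nodes]
n = len(nodes)
a = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
b = (sum(ys) - a * sum(xs)) / n

def ten(density_mtx_mm2):
    """Invert the fit: map a transistor density to a TSMC Equivalent Node."""
    return math.exp((math.log(density_mtx_mm2) - b) / a)

print(f"TEN for 333 MTx/mm^2: {ten(333):.1f} nm")
```

With these illustrative data points the inversion lands a little above 3nm, in the same neighborhood as the article’s 2.9nm TEN (which presumably uses more node points and a better fit).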

To compare the IBM announcement in more detail to previously announced 3nm processes and projected 2nm processes we need to make some estimates.

  • We know the CPP is 44nm from the announcement.
  • We are assuming a Single Diffusion Break (SDB) that would result in the densest process.
  • Looking at the cross section that was in the announcement, we do not see Buried Power Rails (BPR). BPR is required to reduce HNS track height down to 5.0, so we assume 6.0 for this process.
  • To get to 333MTx/mm2 the Minimum Metal Pitch (MMP) must be 18nm, a very aggressive value likely requiring EUV multipatterning.

IBM 2nm Versus Foundry 3nm

Figure 2 compares the IBM 2nm device to our estimates for Samsung and TSMC 3nm processes. We know Samsung is also doing a HNS and TSMC is staying with a FinFET at 3nm. Samsung and TSMC have both announced density improvements for their 3nm processes versus their 5nm processes, so we have known transistor density for all three companies and can compute TEN for all three. As previously noted, IBM’s TEN is 2.9; we now see Samsung’s TEN is 4.7 and TSMC’s TEN is 3.0, again reinforcing that IBM 2nm is like TSMC 3nm and that Samsung is lagging TSMC.

The numbers in red in figure 2 are estimated to achieve the announced densities. We assume SDB for all companies. TSMC has the smallest track height because a FinFET can have a 5.0 track height without BPR, but HNS needs BPR to reach 5.0, and BPR isn’t ready yet.

Figure 2. IBM 2nm Versus Foundry 3nm.

IBM 2nm Versus Foundry 2nm

We have also projected Samsung and TSMC 2nm processes in figure 3. We are projecting that both companies will use BPR (BPR is not ready yet but likely will be when Samsung and TSMC introduce 2nm around 2023/2024). We also assume that Samsung and TSMC will utilize a forksheet HNS (HNS-FS) architecture to reach a 4.33 track height, relaxing some of the other shrink requirements. We have then projected CPP and MMP based on each company’s recent shrink trends.
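The shrink-trend extrapolation mentioned above can be sketched as follows. The pitch values are hypothetical placeholders, not the estimates behind Figure 3:

```python
# Illustrative sketch of projecting a pitch from recent shrink trends.
# The CPP values below are hypothetical placeholders, not the article's data.
cpp_history_nm = [57.0, 51.0, 45.0]  # CPP at three successive (hypothetical) nodes

# Geometric-mean node-to-node shrink ratio over the history
steps = len(cpp_history_nm) - 1
avg_ratio = (cpp_history_nm[-1] / cpp_history_nm[0]) ** (1 / steps)

# Extrapolate one node ahead
next_cpp = cpp_history_nm[-1] * avg_ratio
print(f"Average shrink ratio: {avg_ratio:.2f}, "
      f"projected next-node CPP: {next_cpp:.1f} nm")
```

The same recipe applies to MMP; in practice such projections also get sanity-checked against what the patterning toolset (EUV single vs. multipatterning) can actually deliver.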

Figure 3. IBM 2nm Versus Foundry 2nm.

Power and Performance

At ISS this year I estimated relative power and performance for Samsung and TSMC by node, with some additional Intel performance data. The trend by node is based on the companies’ announced power and performance scaling estimates versus available comparisons at 14nm/16nm. For more information see the ISS article here.

Since IBM compared their power and performance improvements to leading 7nm performance I can place the IBM power and performance on the same trend plots I previously presented, see figure 4.

Figure 4. Power and Performance (estimates).

IBM’s use of HNS yields a significant reduction in power and makes their 2nm process more power efficient than Samsung’s or TSMC’s 3nm processes, although we believe that once TSMC adopts HNS at 2nm they will be as good as or better than IBM for power. For performance, we estimate that TSMC’s 3nm process will outperform the IBM 2nm process.

As discussed in the ISS article these trends are only estimates and are based on a lot of assumptions but are the best projections we can put together.

Conclusion

After analyzing the IBM announcement, we believe their “2nm” process is more like a 3nm TSMC process from a density perspective, with better power but inferior performance. The IBM announcement is impressive, but it is a research device whose only clear benefit versus TSMC’s 3nm process is power, and TSMC 3nm will be in risk starts later this year with production next year.

We further believe that TSMC will have the leadership position in density, power, and performance at 2nm when their process enters production around 2023/2024.

Also Read:

Ireland – A Model for the US on Technology

How to Spend $100 Billion Dollars in Three Years

SPIE 2021 – Applied Materials – DRAM Scaling


Podcast EP19: The Emergence of 2.5D and Chiplets in AI-Based Applications

Podcast EP19: The Emergence of 2.5D and Chiplets in AI-Based Applications
by Daniel Nenni on 05-07-2021 at 10:00 am

Dan and Mike are joined by Sudhir Mallya, vice president of corporate and product marketing at OpenFive. We explore 2.5D design and the role chiplets play. Current technical and business challenges are discussed as well as an assessment of how the chiplet market will develop and what impact it will have.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Sudhir Mallya is Vice President of Corporate and Product Marketing. He is responsible for custom silicon product marketing, technology roadmaps and business model innovation, corporate marketing initiatives, and strategic customer and partner alliances. He was previously at Toshiba, where he led their North American silicon BU with a focus on data center and automotive applications. He is based in Silicon Valley and has held executive positions in engineering, marketing, and business development at leading semiconductor companies. He has led multiple $100M+ global strategic customer engagements from very early concept to high volume production. He has a BSEE from the Indian Institute of Technology, Bombay, and an MSEE from the University of Cincinnati.