Benefits of a 2D Network On Chip for FPGAs
by Tom Simon on 04-12-2022 at 10:00 am


People love FPGAs for networking and communications applications because they offer state-of-the-art high speed interfaces and impressive parallel processing power. The problem is that a lot of the FPGA fabric resources are typically used simply to move data onto, off of and across the chip. Achronix has cleverly employed a two-dimensional (2D) Network on Chip (NoC) to offload this task from the FPGA fabric, freeing up significant area and offering better throughput and speed for all data transfers.

NoC Configuration

With claims like these, it is useful to see actual benchmark results that show the tangible benefits. First, a description of the 2D NoC itself: in the Speedster7t FPGA, Achronix has implemented the NoC as 8 rows and 8 columns evenly spaced across the chip, each with two sets of unidirectional AXI-compatible data paths that are 256 bits wide – all operating at 512 Gbps.
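That 512 Gbps figure follows directly from the path width and clock rate (assuming the 2 GHz NoC clock Achronix has described for the Speedster7t, since the clock is not restated here):

256 bits × 2 GHz = 512 Gbps per direction, per row or column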

The 2D NoC can transfer data to and from the chip’s external interfaces, which include PCIe Gen5, GDDR6, DDR4/5 and Ethernet. The NoC not only supports a packet-based, master/slave transaction model, it also supports Ethernet data streams. In fact, the 2D NoC can move data from the Ethernet interface to the DDR memory without requiring any resources in the FPGA fabric. This enables the Speedster7t with the 2D NoC to support 400 Gbps Ethernet with ease.

NoC vs FPGA Fabric Data Routing

To demonstrate several important aspects of the 2D NoC, Achronix has posted a video that puts the NoC through a stress test to see how it performs in the real world. The test places a data generator at one end of each NoC row/column and loopback logic at the other end, with a transaction checker at the end of the loop. Each row and each column is fitted with this bi-directional configuration. The data generator, loopback logic and transaction checker are implemented in the FPGA fabric, which accesses the NoC through a Network Access Point (NAP).

The same setup was built for comparison using the FPGA fabric to route the data across the chip for each row and column. Without the NoC, 40% more FPGA resources were needed to perform the routing across the chip. Even though the performance was equal at ~4.6 Tbps, the compile time was 40% less for the NoC design than for the fabric-routed design.

Visualizing FPGA Performance

The video highlights the two designs operating with monitoring attached to show data rates. The loading of the entire NoC is also shown visually in the Achronix tools. All the columns and rows showed green, meaning that they were well under full capacity in this particular test; the data rate is determined by the data generator in each row/column.

Achronix FPGA 2D NoC Performance

Achronix’s latest white paper and their 2021 webinar, both on their website, contain other examples of the efficiency and speed of using a NoC. For instance, when internal congestion arises from the addition of processing elements such as encryption/decryption blocks, a design with FPGA-based routing may have to detour data routes around the congested areas, which only adds to timing closure headaches.

Conclusion

A high speed NoC offers a painless method of moving data, helping to fulfill the promise of FPGAs in data intensive applications. The 2D NoC on Achronix FPGAs offers high capacity and bandwidth combined with ease of implementation and rapid design closure. Seeing a heavily loaded stress test makes clear what is possible with the Achronix Speedster7t FPGA. The video is available on the Achronix website. Achronix also has several blogs that go into the specifics of their 2D NoC and how it can be used for 400 Gbps Ethernet or other applications that perform compute intensive operations on data streams.

Also read:

5G Requires Rethinking Deployment Strategies

Integrated 2D NoC vs a Soft Implemented 2D NoC

2D NoC Based FPGAs Valuable for SmartNIC Implementation

Spatial Audio: Overcoming Its Unique Challenges to Provide A Complete Solution
by Kalar Rajendiran on 04-12-2022 at 6:00 am


“If a tree falls in a forest and no one is around to hear it, does it make a sound?” is a philosophical thought experiment that raises questions regarding observation and perception [Source: Wikipedia]. Setting aside the philosophical aspects, if one wasn’t present where a sound was generated, the sound was lost forever. That was true until the advent of audio-related technologies, starting with microphones and loudspeakers. One could be seated in a far corner of a very large auditorium and still hear a speech being delivered from the podium. Audio technology has advanced a lot since those early days. Loudspeakers have progressed through stereo, quad, 5.1, 7.1, large speaker arrays, Ambisonics, and Dolby Atmos. Headphones have advanced through stereo and multi-driver to binaural.

While audio technologies have progressively brought the listener closer to an aurally immersive experience, they do not fully mimic the natural world. There is more to that experience than just hearing a sound – the phrase “you had to be there” has truth to it. This experience is termed the 3D or spatial audio experience. The rendering technology should also sense the listener’s movement relative to the assumed location of the sound source and continuously provide a realistic experience. Gaming, augmented reality and virtual reality have introduced more challenges to overcome for achieving a realistic aural experience.

So, how can audio electronics mimic the aural experience of the natural world and the virtual world?

Recently, CEVA and VisiSonics co-hosted a webinar titled “Spatial Audio: How to Overcome Its Unique Challenges to Provide A Complete Solution.” The presenters were Bryan Cook, Senior Team Leader, Algorithm Software, CEVA, Inc. and Ramani Duraiswami, CEO and Founder, VisiSonics, Inc. Bryan and Ramani explained spatial audio, how it works, the importance of head tracking, challenges faced and their company’s respective offerings for a complete solution.

The following are some excerpts based on what I gathered from the webinar.

Spatial/3D Audio

While surround sound technology renders a good listening experience, the sound itself is mixed for a given sweet spot. A listener looking in any direction other than perfectly forward breaks the immersion of surround sound: audiovisual media lose realism, and video games miss location information crucial to virtual survival.

Sound in a real world scenario comes from all directions: up, down, left, right, rear and front. And the sound source typically stays fixed while the listener may be moving. Spatial/3D audio experience is one that reproduces a realistic aural experience of the real world and the virtual world as the case may be. Spatial audio technology is being deployed in music, gaming, audio/visual presentations, automotive and defense applications. The technology is delivered primarily via headphones/TWS earbuds, but also through smart speakers/sound bars, and AR/VR/XR devices.

How does Spatial Audio Work?

Experiencing spatial sound relies on some primary and secondary cues.

The primary cues are based on interaural time difference (ITD) and interaural level difference (ILD), which arise from the difference in distance between the source and each ear. ITD is the difference in arrival time of a sound at each ear. ILD is the difference in volume of a sound at each ear.
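For a sense of the magnitudes involved (an illustration, not a figure from the webinar), the classic Woodworth approximation models the head as a sphere of radius a, with c the speed of sound and θ the azimuth of the source:

ITD(θ) ≈ (a/c) × (θ + sin θ)

With a ≈ 8.75 cm and c ≈ 343 m/s, a source directly to one side (θ = π/2) gives roughly (0.0875/343) × (π/2 + 1) ≈ 0.66 ms – about the largest time difference the ears ever experience.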

The secondary cues are the position-dependent frequency changes to the sound. The shape of the listener’s head, ears and shoulders amplifies or attenuates sounds at different frequencies. While low frequency sounds are either unaffected or affected consistently, high frequency transformations depend on ear shape, and mid-frequency transformations depend on head and body shape. The overall effects of these transformations can be on the order of tens of dB.

As such, head tracking and capturing its impact on the primary and secondary cues become essential for delivering a realistic spatial audio experience. These effects are captured and modeled as head-related transfer functions (HRTFs). Without head tracking, the spatial audio cues remain unchanged as the head moves, so the sound source appears to move with the head. With head tracking, the sound sources are held stationary in the digital world. This recreates the real world situation and improves the effectiveness of the rendered spatial audio experience. Head tracking also helps with disambiguating the location of a sound source when multiple locations can produce the same primary spatial audio cues. This is referred to as the cones of confusion effect.
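In signal terms (the standard formulation, not equations specific to either company’s products), rendering a mono source x(t) at direction (θ, φ) amounts to convolving it with the left- and right-ear head-related impulse responses for that direction:

yL(t) = (x ∗ hL,θ,φ)(t)    yR(t) = (x ∗ hR,θ,φ)(t)

Head tracking continuously updates (θ, φ) so the virtual source stays fixed in the world as the head turns.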


Technical Challenges to Implementing an Effective Spatial Audio System

There are two latencies that stand in the way of delivering an effective spatial audio experience. The first is audio latency: the time it takes for audio playback to be sent to the headphones. For pre-recorded music, audio output latency doesn’t matter, but for movies and games a large audio output latency can lead to lip-sync issues. The second is head tracking latency: the time that passes from the moment the head moves to when the audio changes to reflect that motion.

When head tracking is not processed locally on the headphone device itself, a large latency can be introduced. For example, Apple AirPods Pro head tracking latency is more than 200 ms because the sensor information is transferred to the phone for processing. At 200 ms, head motion cannot be processed quickly enough to correct the perceived source direction, making sources hard to localize and leading to over 60 degrees of error. The result is an erratic spatial audio experience, particularly during large or frequent head movements.

Addressing the Technical Challenges For a Robust, Complete Solution

A better spatial audio experience can be delivered with a low head tracking latency. Low latencies can be achieved with local audio processing on the headphones. This approach eliminates wireless transmissions in the head tracking processing path. CEVA’s reference design delivers less than 27 ms of head tracking latency through the effective use of CEVA-X2 DSP for local spatial processing.

With a 27 ms latency, less than ten degrees of error in perceived source direction is achievable. A low latency also helps with disambiguating the location of the sound source with respect to the cones of confusion discussed earlier.
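Those error figures line up with simple arithmetic: the perceived-direction error is roughly the head’s angular velocity multiplied by the tracking latency. Assuming a brisk head turn of about 300 degrees per second (our illustrative figure, not one quoted in the webinar):

error ≈ ω × latency: 300°/s × 0.200 s = 60°, while 300°/s × 0.027 s ≈ 8°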

The figure below shows the inertial measurement unit (IMU) sensors useful for head tracking and what each contributes.

Complete Solution from CEVA and VisiSonics

A spatial audio system takes in the audio input, head tracking input, and the head related transfer function (HRTF) for processing into spatial audio output.

CEVA’s MotionEngine® software enables high accuracy head tracking, and the CEVA-X2 DSP core enables low latency head tracking. The head tracking sensing itself is enabled by CEVA’s IMU sensors, sensor fusion software and algorithms, and activity detectors.

VisiSonics’ RealSpace® 3D Spatial Audio technology easily integrates into headphones for personalizing HRTFs for mobile devices and VR/AR/XR applications.

For all the details from the webinar, you can listen to it in its entirety. If you are looking to add spatial audio capabilities to your audio electronics products, you may want to have deeper discussions with CEVA and VisiSonics.

Also read:

CEVA PentaG2 5G NR IP Platform

CEVA Fortrix™ SecureD2D IP: Securing Communications between Heterogeneous Chiplets

AI at the Edge No Longer Means Dumbed-Down AI


ISO 26262: Feeling Safe in Your Self-Driving Car
by Daniel Nenni on 04-11-2022 at 10:00 am


The word “safety” can mean a lot of different things to different people, but it’s a word we hear frequently when the topic involves automobiles. In contrast, “functional safety” has a long-established meaning in the design of electrical and mechanical systems: an automatic protection mechanism with a predictable response to failure. When a critical component fails, a functionally safe car either compensates and continues to operate properly or shuts down in a safe manner (such as slowing down and pulling off the road).

The ISO 26262 standard lays out a bunch of functional safety requirements for anyone designing an electrical or electronic system for use in road vehicles. I’ve been seeing many more references to ISO 26262 in the last few years, partly driven by the intense interest in self-driving cars. If some part of a traditional steering system has problems, in many cases the driver can take corrective action. But if the car is driving autonomously and the electronic steering system fails, there may not be time for a human to react. In some vehicles, there won’t even be manual controls available at all.

The latest news I saw on ISO 26262 was an announcement that the IDesignSpec Suite of software products from Agnisys has been certified to meet this standard. I wasn’t quite sure what this means and why it matters to chip designers, so I had one of my periodic chats with Agnisys CEO and founder Anupam Bakshi. He started by noting that ISO 26262 is not specific to self-driving cars, or even to cars in general, because it also applies to trucks, buses, heavy equipment, and more. It spans quite a wide range of vehicles and is important to several industries. Of course, the more safety-critical the application, the more the standard matters.

Anupam explained that the ISO 26262 document is huge, with many sections covering diverse topics related to the way that vehicular electronic systems and subsystems are designed and verified. One of these topics involves the electronic design automation (EDA) tools used by engineers to develop the arrays of sensors, chips distributed throughout the frame, complex wiring harnesses, and sophisticated central processors in modern automobiles. The standard mandates that these tools be qualified to ensure that they don’t introduce errors in the design or fail to catch errors during verification.

This sounds like a significant burden on car companies, and Anupam noted that it indeed can be. However, it turns out that an EDA vendor has the option to qualify its own tools and minimize the effort required by its customers. The car designers don’t just take the vendor’s word for it; there’s an entire ecosystem of testing organizations that do extensive investigation of tools, tool flows, and the processes and people used to develop them. One of the most highly regarded such organizations is TÜV SÜD, which provides testing, inspection, and certification solutions worldwide for a number of important standards.

That’s what this announcement is all about. TÜV SÜD has certified that the Agnisys software products and development flow have achieved the stringent tool qualification criteria defined by ISO 26262. Anupam filled in some more details for me. Agnisys is certified to meet any Automotive Safety Integrity Level (ASIL) in the standard. Agnisys is also certified as meeting IEC 61508, a fundamental industrial functional safety standard that underlies ISO 26262 for vehicles and corresponding safety standards for several other industries.

Anupam read me the wording on the certificate, which includes the statements “qualified to be used in safety-related software development according to ISO 26262” and “suitable to be used in safety-related development according to IEC 61508.” I asked him how much effort it took to achieve this level of qualification, and he said that it was quite an involved procedure. The process of certification by TÜV SÜD included a series of audits of the Agnisys organization and tool development processes in addition to the assessment of the tools themselves. The evaluation spanned such topics as:

  • Software development process
  • Quality assurance (QA) measures
  • Configuration and release management
  • Product verification and validation
  • Customer support
  • Bug reporting procedures
  • Company “safety culture”

So why is this important for the users of Agnisys tools? The certification means that developers of intellectual property (IP) and complex system-on-chip (SoC) devices using application-specific integrated circuit (ASIC) or field-programmable gate array (FPGA) technology do not have to take any additional steps to qualify or certify the Agnisys products in their flow. Agnisys provides the IDesignSpec Tool Qualification Kit (TQK) that users can apply directly to the tool evaluation step required by ISO 26262. This saves a big chunk of time and effort in the IP or chip development process. Using pre-qualified tools makes it easier to satisfy automotive system designers who insist that their silicon suppliers meet the standard.

I asked Anupam whether he already has customers designing automotive chips, and he said yes, including huge supercomputer-class artificial intelligence (AI) processors for autonomous vehicles. He noted that the qualification covers the full IDesignSpec Suite, with twelve products specifically called out on the certificate. He closed by saying that he was really proud of his team for delivering such high-quality products and successfully completing the rigorous inspection and assessment process. I encourage everyone doing safety-critical designs to find out more at https://www.agnisys.com/iso-26262-compliance/.

Also read:

DAC 2021 – What’s Up with Agnisys and Spec-driven IC Development

AI for EDA for AI

What the Heck is Collaborative Specification?


Can Intel Catch TSMC in 2025?
by Scotten Jones on 04-11-2022 at 6:00 am


At the ISS conference held April 4th through 6th, I presented on who I thought would have the leading logic technology in 2025. The following is a write-up of that presentation.

ISS was a virtual conference in 2021, where I presented on who currently had logic leadership and declared TSMC the clear leader. Following that conference, I did a lot of calls for investment firms and was often asked when Intel would catch TSMC. My answer was: unless TSMC stumbled, never.

A year later the foundries are stumbling and Intel is accelerating – can Intel catch up?

I reviewed some Intel history, discussed their leadership throughout the 2000s, then how in the 2010s they began to fall behind, and why I thought this happened.

I have previously published on Intel’s issues here.

The bottom line is that from 2014 through 2019, Samsung and TSMC each introduced 4 nodes while Intel introduced 2. The Intel nodes were bigger individual density jumps, but when you chain together the 4 foundry jumps, they increased density more than Intel and took the lead. Figure 1 summarizes this.

Figure 1. Foundries Versus Intel in the 2010s.

Figure 1 only illustrates the “nodes” from Intel; they weren’t standing still. For 14nm they released 5 versions, all with the same density but with progressively improving performance, and for 10nm they released 4 versions, once again with the same density but improving performance (note the last version has now been renamed 7nm).

By 2020 Samsung and TSMC both had 5nm in production, and compared to Intel 10nm these are denser processes. TSMC had taken a larger jump from 7nm to 5nm than Samsung and was the clear leader, with the densest process, the smallest SRAM cell size and the industry’s first silicon germanium FinFET. Figure 2 summarizes this.

Figure 2. 2020 Comparison.

In 2021, the foundries slowed down.

Samsung 3nm has encountered yield issues, and we believe that in 2022 their 3GAE (early) process will be used almost exclusively for internal products, with 3GAP (performance) being released to external customers in 2023. Samsung chose to go to Horizontal Nanosheets (HNS) for 3nm (a type of gate-all-around process Samsung calls Multibridge). I believe HNS production issues are still being worked out and that Samsung’s interest in being first to HNS has led to delays and poor yields.

TSMC did risk starts of their FinFET-based 3nm process in 2021, but production is now pushed to late 2022 with products in the industry in 2023. In 2019 TSMC had risk starts of 5nm, and by late 2020 iPhones were shipping with TSMC 5nm parts; for 3nm we won’t see iPhones until 2023. TSMC has also reduced the density target for this process from an original 1.7X to ~1.6X, with reduced performance targets.

While Samsung and TSMC were experiencing delays, Intel announced “Intel Accelerated,” an aggressive roadmap of 4 nodes in 4 years. This is truly accelerated when you consider that 14nm took 3 years and 10nm took 5 years. I was frankly skeptical when this was announced, but at the recent investor event Intel pulled in the most advanced 18A process from 2025 to 2024!

Our view from now to 2025 is as follows:

2022 – Intel 4nm process, Intel’s first use of EUV, with a 20% performance improvement over 7nm. Intel had formerly talked about a 2X density improvement for this generation but is now just saying a “significant density improvement”; we are estimating 1.8X. Samsung 3nm will likely be for internal use only, with a 1.35X density improvement, 35% better performance at the same power and 50% lower power at the same performance. The density improvement is not very impressive, but the performance and power improvements are, likely due to the adoption of HNS. TSMC 3nm is FinFET based and will provide an ~1.6X density improvement, with 10% better performance at the same power and 25% lower power at the same performance.

2023 – Intel 3nm process with 18% better performance, denser libraries and more EUV use. We estimate a 1.09X density improvement, making this more of a half node. Samsung 3GAP should be available to external customers, and TSMC 3nm parts should appear in iPhones.

2024 – In the first half, the Intel 20A (20 angstrom = 2nm) process is due with a 15% performance improvement. This will be Intel’s first HNS (they call it RibbonFET) and they will also introduce backside power delivery (they call this PowerVia). The backside power delivery addresses IR power drops while making frontside interconnect easier. We are estimating a 1.6X density improvement. In the second half of 2024, Intel’s 18A process is due with a 10% performance improvement. We are estimating a 1.06X density improvement, making this another half node. This process has been pulled in from 2025, and Intel says they have delivered test devices to customers.

2025 – Samsung 2nm is due in late 2025. We expect it to be HNS, and because it will be Samsung’s third-generation HNS (counting 3GAE as the 1st generation and 3GAP as the 2nd) and their previous generations have been relatively less dense, we are forecasting a 1.9X density jump. TSMC has not announced their 2nm process yet, other than to say they expect to have the best process in 2025. We may see 2nm in 2024, but for now we have it placed in 2025; we expect an HNS process and are estimating a 1.33X density improvement. We believe the density improvement will be modest because it is TSMC’s first HNS and because the 3nm process is so dense that further improvements will be more difficult.

Figure 3 illustrates how Intel may “flip the script” on the foundries by doing 4 nodes while the foundries do 2.

Figure 3. Density jumps.
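Chaining the per-node density estimates above (our arithmetic from the figures already quoted, rounded):

Intel: 1.8 × 1.09 × 1.6 × 1.06 ≈ 3.3X
Samsung: 1.35 × 1.9 ≈ 2.6X
TSMC: 1.6 × 1.33 ≈ 2.1X

Intel compounds the largest cumulative improvement over the period, although, as discussed below, TSMC’s denser starting point keeps it in the density lead.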

We can now look at how Intel, Samsung, and TSMC will compare in density out to 2025. We also added IBM’s 2nm research device, based on their 2nm announcement. Figure 4 presents density versus both year and node.

Figure 4. Transistor Density Trends.

From Figure 4, we expect TSMC to maintain the density lead through 2025.

The most complex part of our analysis is illustrated in Figure 5, where we compare performance. It is very difficult to compare processes for performance without having the same design run on different processes, and this rarely happens. The way we generated this plot is as follows:

  • The Apple A9 processor was run on both Samsung 14nm and TSMC 16nm, and Tom’s Hardware found the same performance for both versions; we have normalized performance at this node to 1 for both Samsung and TSMC.
  • From the 14/16nm node through 3nm we have used the companies announced performance improvements to plot relative performance. For 2nm we have used our own projections.
  • We don’t have any designs that ran on Intel processes and either Samsung or TSMC. However, AMD and Intel both make x86 microprocessors, and AMD microprocessors on the TSMC 7nm process have competed with Intel 10nm SuperFin processors with similar performance, so we have set Intel 10SF to the same performance as TSMC 7nm. This is not ideal and assumes that both companies have done an equally good job on design, but it is the best available comparison. We have then scaled all the other Intel nodes from 10SF based on Intel’s announcements.
  • Once again, we have placed IBM’s 2nm on this chart based on their 2nm announcement.

Figure 5. Relative Performance Trends.

Our analysis leads us to believe Intel may take the performance lead on both a year basis and a node basis. This is consistent with Intel’s stated goal of taking the “performance per watt” lead. Assuming TSMC is referring to density, their statement that they will have the best process in 2025 could also be true.

In conclusion we believe Intel has been able to significantly accelerate their process development at a time when the foundries are struggling. Although we don’t expect Intel to regain the density lead over the time period studied, we do believe they could retake the performance lead. We should get another good read on progress by the end of 2022 when we see whether Intel 4nm comes out on time.

Also Read:

TSMC’s Reliability Ecosystem

The EUV Divide and Intel Foundry Services

Intel Discusses Scaling Innovations at IEDM

Samsung Keynote at IEDM


The ESD Alliance CEO Outlook is Coming April 28 –– Live!
by Bob Smith on 04-10-2022 at 10:00 am


It’s not often our community is able to attend an in-person discussion where executives share their insights on industry trends, especially over the past two years as the pandemic swept across the globe.

Well, that’s about to change and I suggest you start jotting down questions as the ESD Alliance plans its first in-person CEO Outlook in three years. We’re featuring five experienced executives –– Dr. Anirudh Devgan of Cadence Design Systems, Niels Fache from Keysight Technologies, Aki Fujimura of D2S, Siemens EDA’s Joe Sawicki and Simon Segars of Arm. Ed Sperling of Semiconductor Engineering leads the discussion. Audience participation will be encouraged via a Q&A session.

Keysight is our co-host Thursday, April 28, at Agilent Building 5 at 5301 Stevens Creek Blvd. in Santa Clara, Calif., beginning at 5:30pm with a networking reception with food and beverages. The CEO Outlook panel begins at 6:30pm. It is free for ESD Alliance and SEMI members. Pricing for non-members is $49 per person. Click here for registration information.

The ESD Alliance Annual Membership meeting will be held prior to the start of the CEO Outlook beginning at 5pm at the same location. Non-members are welcome to attend if they purchase a ticket for the CEO Outlook.

The CEO executive panel is a long-standing yearly tradition that started with the EDA Consortium (EDAC) before our charter was expanded to include the entire system design ecosystem and we changed our name to the Electronic System Design (ESD) Alliance.

The wait is over and I look forward to seeing you again in person, and recommend you register today. Our CEO Outlook is a popular event and we’re expecting a big crowd. Registration details can be found here.

About the ESD Alliance
The ESD Alliance serves as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry. We have an ongoing series of networking and educational events like the CEO Outlook, programs and initiatives. Additionally, as a SEMI Technology Community, ESD Alliance companies can join SEMI at no extra cost.

To learn more about the ESD Alliance, visit the ESD Alliance website. Or contact me at bsmith@semi.org if you have questions or need more information.

Engage with the ESD Alliance at:
Website: www.esd-alliance.org
ESD Alliance Bridging the Frontier blog
Twitter: @ESDAlliance
LinkedIn
Facebook

Also read:

Key Executive to Discuss Latest Chip Industry Design Trends at SEMI ESD Alliance 2022 CEO Outlook April 28

Nominations Open for Phil Kaufman Hall of Fame Sponsored by ESD Alliance and IEEE CEDA

Cadence’s Dr. Anirudh Devgan to be Honored with the 2021 Phil Kaufman Award on May 12


Chip Shortage Killed the Radio in the Car
by Roger C. Lanctot on 04-10-2022 at 6:00 am


“In my mind and in my car, we can’t rewind we’ve gone too far.” – “Video Killed the Radio Star” – The Buggles

I discovered within days of driving home my new BMW X3 last fall that I was a victim of the much ballyhooed chip shortage. Among the features “deleted” from my car were “Passenger Lumbar,” “BMW Digital Key,” and “SiriusXM and HD.”

To its credit, BMW and the dealer detailed the deletions on the vehicle’s Monroney label. Sadly, the SiriusXM rep I called to help me find and activate the service was unaware that the necessary hardware was not available as it should have been according to my coded VIN. (Rumor has it that BMW intends to provide an aftermarket solution – but I’m not holding my breath.)

The experience was startling. Was BMW considering removing the car radio, or maybe just doing without digital? Are they thinking that life would be so much simpler if they could dispense with the radio, all the testing, and all the related cabling, semiconductors, and those damn antennas?

In fact, delivering an interference-free AM experience in an EV has become sufficiently challenging for some OEMs that they have, in some isolated cases, chosen to simply do without. We expect the radio in an internal combustion vehicle – but maybe not in an EV?

We take it for granted. But who is to say that there must be a radio in the car?

Has the time arrived when we need a radio mandate? Why didn’t my dealer see fit to alert me to the missing SiriusXM and HD Radio hardware? Was the dealer afraid it might be a deal breaker?

Is it time for the FCC to step in and subsidize the chip making resources of semiconductor companies such as NXP, Texas Instruments, and ST Micro? Do we need a strategic SiriusXM/HD Radio semiconductor reserve – to be tapped into in times of supply chain crises?

There is a clear public interest in requiring access to free over-the-air broadcast content in cars – especially in times of severe weather, terrorist attacks, road closures, and bridge collapses! You might lose your cell service, but you can always find a channel on the radio. And, of course, there’s the Emergency Broadcast System.

Within a month or two of taking delivery of my BMW I had the opportunity to take in the hybrid radio experience delivered in the newest EVs from Mercedes Benz – the so-called MBUX. Word is that Mercedes Benz has also been hit by radio chipset shortages but is withholding delivery of those chip challenged vehicles until the content can be installed.

My chip-less BMW has me listening to the radio without the benefit of rich HD Radio metadata and visual content and, of course, without SiriusXM – Howard Stern remains out of reach. A new in-car content consumption experience infused with visual metadata and integrated with recommendation engines, search, and personal profiles is arriving in the market only slightly delayed by the chip shortage.

Presumably BMW and others will reverse their “deletions” in recognition of the enduring value proposition of radio in the car.

Also read:

A Blanche DuBois Approach Won’t Resolve Traffic Trouble

Auto Safety – A Dickensian Tale

No Traffic at the Crossroads


Podcast EP70: A Review of EDA and IP Growth For 2021 with Dr. Walden Rhines
by Daniel Nenni on 04-08-2022 at 10:00 am

Dan is joined by Dr. Walden Rhines in his capacity as Executive Sponsor for the SEMI Electronic Design Market Data Report. Wally provides an overview of the most recent report covering all of 2021. Spoiler alert: it was a record-breaking year in many areas.

Dan and Wally explore the details behind the numbers and what it may mean for the coming year of semiconductor growth. Various events around the world are also discussed with regard to their possible impact on the industry.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Esko Mikkola of Alphacore
by Daniel Nenni on 04-08-2022 at 6:00 am


CEO’s background: what led to your current organization?
Since an early age, exceeding expectations has driven me to succeed in my areas of interest. While enlisted in the Finnish Army, I was selected, trained, and short-listed for the country’s astronaut training program before it was unfortunately dissolved. A major realized goal was to qualify for and represent Finland in the 2004 Athens Olympics. In the US, I won the NCAA javelin championship in 2013. This rigorous athletic discipline complemented a similarly intense academic drive to extensively research advanced analog electronics with an emphasis on radiation tolerance, reliability modeling and characterization structures. While completing a PhD in Electrical Engineering, I focused on evaluating analog IC modelling correlation issues affecting advanced small geometry technologies. During the course of this work, it became apparent that a significant unserved opportunity existed to deliver ultra-low power, high-speed, and high-resolution data converter solutions. I was particularly interested in the mixed signal aspects of conversion architectures, especially as applied to analog circuit functionality and reliability when using emerging nanometer-scale geometries. Those experiences led to managing successful research and development programs where the ideas were put into practice. Once the technology and opportunity were validated, I made the entrepreneurial leap as the Founder and CEO of Alphacore Inc.

Please tell us about Alphacore and its product offering.

The Alphacore design team has extensive experience in delivering products and solutions for a diverse customer base. We offer products and services that often far exceed the performance specifications of what is available in the market today. These solutions fulfill the needs of a broad range of leading-edge commercial, scientific, and aerospace communication applications. For example, Alphacore’s novel data converter architecture, HYDRA™ (Hybrid DigitizeR Architecture), allows us to deliver a best-in-class family of RF data converters. The resolution in this family is up to 14 bits with analog bandwidths as high as 25 GHz, while consuming milliwatts of power. These hybrid architecture innovations are key to delivering the unbeatable specs found in our gigasamples-per-second, milliwatts-of-power RF data converter library.

From our foundation in RF data converters, we have built a transceiver architecture, SMART™ (Scalable Macro Architecture for Receiver and Transmitter), that is delivered to customers as scalable macros. This innovative, multi-core macro approach is ideal for phased arrays, beam forming, massive MIMO and 5G/6G applications, and offers significant advantages compared to other approaches. Starting with our RF data converter family, we cover selectable data converter specs for multi-channel arrays available from a few hundred MHz to 20 gigasamples per second. Alphacore’s innovative hybrid architecture enables these arrays to be configured with best-in-class area and power efficiency. Our scalable macro transceiver architecture can be configured with on-board PLLs, with selectable I/O formats, and with or without SerDes. This scalable approach enables optimum performance and maximum configurability, with minimal customization and design risk.

What makes Alphacore Intellectual Property/products unique?

Continuing with the theme from the previous questions, there are really two things that make our products unique. First, and probably most important, is that the team has invented and now productized the HYDRA™ and SMART™ architectures, which solve the area, performance, and power challenges not just of the data converters, but of the macros required at the RF system level.

Alphacore is growing fast, and leverages our small company agility to deliver innovative architectures and remarkable system performance for our partners and customers. Our ADCs have the lowest power in their resolution and bandwidth class. Additionally, we are a member of the prestigious GlobalFoundries FDX Network, which recognizes these industry-demonstrated design accomplishments. Because we continue to blaze new trails in leading-edge process technologies such as GF’s 22FDX 22nm fully depleted silicon-on-insulator (FD-SOI) CMOS process, our customers are assured their unique solution is developed using rigorous discipline approved by an internationally respected partner.

For example, our IP cores, with world-class performance levels using low power FD-SOI technologies, offer proven solutions that include data converter products. We alone offer a 10X reduction in power versus the nearest commercial competitors, wide bandwidth, and innovative automatic background calibration for spur removal, all at gigasample-class conversion rates. The outright performance increase of our cores offers a unique, market-disrupting position against established data conversion vendors. Significantly, these cores have enabled cost-effective IC readout solutions with frame rates all the way up to 600,000 frames per second, and more reliable control electronics for harsh environments.

What customer challenges/competitive positioning are you addressing?

Our commercial customers are in market segments that are driven by frequently upgraded specifications and demands for new advanced products, such as emerging 5G communications standards and Advanced Driver-Assistance Systems, that require high quality solutions at low cost. Alphacore delivers data converters with the best performance figure of merit (power dissipation, sampling rate, effective number of bits, and signal-to-noise-and-distortion ratio), imaging solutions with the highest resolutions and frames per second available, and power management products such as the most power efficient DC-DC converters.

As mentioned before, Alphacore’s new products enable market-disrupting, consumer-priced solutions with unbelievable performance and pricing in monolithic packages that replace expensive and less reliable hybrid multi-chip modules. Alphacore’s business model provides our customers with optimum tradeoff selection for power and performance of data converters, propelling faster time-to-market for system designs that are simpler, smaller and more affordable, with assistance available from our dedicated technical support throughout the process.

Furthermore, competitive versions of these products can be delivered with characterized radiation tolerance specifications.

Which markets most benefit from Alphacore IP?

Our technical team includes disciplined and seasoned “Radiation-Hardened-By-Design” (RHBD) experts; however, we specialize in designing commercial high-performance solutions for the niche needs of demanding segments including 5G communications. Alphacore’s very-low-power, high-speed data converter IP design blocks are ideally suited to the direct RF sampling architectures necessary for advanced communication standards including 5G, LTE and WiFi and their base stations.

Also, our library of IP facilitates the fast development pace of new technology in other major markets for automotive sensors, aerospace, defense, medical imaging, homeland security, scientific research, or electronics for High Performance Computing (HPC) in space environments.

All of these market segments drive next generation products that, similar to Moore’s Law, seem to demand 2x, 3x multiples of performance or resolution increases at similar multiples of economies of scale and pricing. Significantly, Alphacore’s product roadmap, with complementing photolithographic breakthroughs, novel scaling architectures, etc., is ideal whether customers request low-cost high-performance IP or much larger scaled IP versions with massive increases in data or image resolution.

What are Alphacore’s Goals?

The company is driving towards recognition of its strengths and expertise, and commercial business development of its products and licensable IP in a handful of strategic business opportunity-driven areas that directly focus on Alphacore’s clear strengths. I would characterize these critical growth areas for us as 1) High-performance Analog, Mixed-Signal and RF Solutions, 2) Advanced CMOS Image Sensor, Camera System and ROIC products and services, 3) Radiation Hardening expertise for applications with Harsh Environments, and 4) Emerging Solutions for 5G, SATCOM, Scientific Research, Automotive, Defense, Space.

Also Read:

Aerial 5G Connectivity: Feasibility for IoT and eMBB via UAVs

A Tour of Advanced Data Conversion with Alphacore

Analog to Digital Converter Circuits for Communications, AI and Automotive


Design to Layout Collaboration Mixed Signal
by Tom Simon on 04-07-2022 at 10:00 am


When talking about today’s sophisticated advanced-node designs, it’s easy to think first about the digital challenges. Yet the effort to design the analog and mixed signal blocks they need should not be underestimated. The need for high speed clocks, high frequency RF circuits and high bit rate IOs makes the analog portions, particularly on FinFET nodes, complex and difficult. Analog design has in reality maintained its importance to SoC success over time. Indeed, the facts show growing numbers of analog and AMS circuit and layout designers working in teams around the world. Collaboration within and among these teams has become a primary concern.

There is a changing analog tool landscape too. Custom Compiler from Synopsys is making significant inroads into the previously monolithic custom IC design market. Synopsys reports that there are now nearly 200 companies using Custom Compiler. This, in conjunction with Synopsys’ own internal usage for the development of their commanding analog IP portfolio, means that there are literally thousands of seats in use today, and the numbers are growing. In a recent webinar by Synopsys and Cliosoft, a leading design data management solution provider, Synopsys cites increased design efficiency as the key to their on-going success. The webinar, titled “Enabling Effective Design & Layout Collaboration for Next Generation Analog and Mixed Signal Designs,” touts the efficiencies added by integration with Cliosoft for design collaboration.

One might assume that this is just about checking files in and out so they can be edited safely. However, the webinar goes into detail about some pretty important aspects of the integration of Cliosoft SOS and Synopsys Custom Compiler. They specifically highlight the signoff review features. It’s important to note that circuit designers and layout engineers working on the same project might be sitting halfway around the world from each other. The integration described in the webinar offers sophisticated features so that one team can add notation to areas within a design, including highlighting specific areas of the design graphically to help communicate changes that might be needed. The Cliosoft SOS integration allows this collaboration activity right inside of the Custom Compiler user interface and directly on the design.

Cliosoft integration with Custom Compiler for Collaboration

The webinar has an overview that shows how Cliosoft SOS capabilities can be used for design/layout collaboration and closure. The four elements of this are managing the design, facilitating collaboration, offering insight through analysis and finally making reuse possible.

Design data management includes revision control as you would expect. It offers release and variant management. Data security and access controls are provided as well. It also contains features that help to optimize network and disk storage usage.

The collaboration element covers support for remote cache servers with automatic synchronization. Underlying this are mechanisms that provide secure and efficient data transfer between sites.

The analysis features can produce design audit reports and can also be used to spot schematic/layout differences. There are also reports on the changes made between releases or over time on designs. All of this helps manage and track the design process.

The fourth category, reuse, while long sought after, has in practice proven challenging. Cliosoft SOS helps companies effectively locate and reuse designs. Customers can create their own IP catalog. When there are fixes and releases to IP in the catalog, they are propagated so everyone stays up to date. The net effect is to increase productivity.

The webinar covers examples of each of these elements. Also, it includes a demo that shows how Cliosoft SOS is used directly inside of the Custom Compiler GUI for several of the tasks mentioned above to improve collaboration. The full webinar can be viewed on the Synopsys website.

Also read:

Synopsys Tutorial on Dependable System Design

Synopsys Announces FlexEDA for the Cloud!

Use Existing High Speed Interfaces for Silicon Test

Intel Best Practices for Formal Verification
by Daniel Nenni on 04-07-2022 at 6:00 am


Dynamic event-based simulation of RTL models has traditionally been the workhorse verification methodology.  A team of verification engineers interprets the architectural specification to write testbenches for various elements of the design hierarchy.  Test environments at lower levels are typically exercised then discarded as RTL complexity grows during model integration.  Methodologies have been enhanced to enable verification engineers to generate more efficient and more thorough stimulus sets – e.g., biased pseudo-random pattern generation (PRPG) and portable stimulus (PSS) actions, flows, and scenarios.

Yet, dynamic verification (DV) is impeded by the need to have RTL (or perhaps SystemC) models thoroughly developed to execute meaningful tests, implying that bug hunting gets off to a rather late start.

Formal verification (FV) algorithms evaluate properties that describe system behavior, without the need for dynamic stimulus.  There are three facets to FV:

  • assertions

Model assertions are written to be exhaustively proven by evaluating the potential values and sequential state space for specific signals.

  • assumptions

Known signal relationships (or constraints) are specified on model inputs, to limit the bounds of the formal assertion proof search.

  • covers

A cover property defines an event (or sequence) of interest that should eventually occur, to record the scope of verification testing – an example would be to ensure proper functionality of the empty and full status of a FIFO.

A common semantic form for these FV properties is SystemVerilog Assertions (SVA). [1]
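For a flavor of the SVA semantics, here is a minimal sketch using hypothetical FIFO signal names (wr_en, rd_en, full, empty – our illustrative names, not code from the paper), with one example of each facet:

module fifo_props (
  input logic clk, rst_n,
  input logic wr_en, rd_en, full, empty
);
  // Assertion: to be exhaustively proven -- a write is never accepted while full
  a_no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
    full |-> !wr_en);

  // Assumption: constrains the environment -- no reads arrive while empty
  m_no_underflow: assume property (@(posedge clk) disable iff (!rst_n)
    empty |-> !rd_en);

  // Covers: record that the empty and full states are both reachable
  c_empty: cover property (@(posedge clk) empty);
  c_full:  cover property (@(posedge clk) full);
endmodule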

The properties in an assertion to be exhaustively proven have a range of complexities:

  • (combinational) Boolean expressions
  • temporal expressions, describing the required sequence of events
  • implication: event(s) in a single cycle imply additional event(s) must occur in succeeding cycles
  • repetition: an event must be succeeded by a repetition of event(s) (see the sketch below)
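Sketched in SVA (with hypothetical request/grant signals, purely for illustration), the Boolean, implication, and repetition classes look like this:

module property_classes (
  input logic clk, rst_n,
  input logic req, gnt, busy
);
  // Boolean expression: grant and busy are never asserted together
  a_bool: assert property (@(posedge clk) disable iff (!rst_n)
    !(gnt && busy));

  // Temporal implication: a request must be granted within 1 to 4 cycles
  a_impl: assert property (@(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt);

  // Repetition: one cycle after a grant, busy must hold for exactly 2 cycles
  a_rept: assert property (@(posedge clk) disable iff (!rst_n)
    gnt |=> busy [*2]);
endmodule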

When a formal verification tool is invoked to evaluate an assertion against a functional model, the potential outcomes are:

  • proven
  • disproven (typically with a counterexample signal sequence provided)
  • bounded proven to a sequential depth – an incomplete proof, halted by resource and/or runtime limits as the potential state space being evaluated grows

FV offers an opportunity to find bugs faster and improve the productivity of the verification team.  Yet, employing the optimal methodology to leverage the relative strengths of both formal verification and dynamic verification (with simulation and emulation) requires significant up-front project planning.

At the recent DVCON, Scott Peverelle from the Intel Optane Group gave an insightful talk on how their verification team has adopted a hybrid FV and DV strategy. [2] The benefits they have seen in bug finding and (early) model quality are impressive – part of the initiative toward “shift left” project execution.  The rest of this article summarizes the highlights of his presentation.

Design Hierarchy and Hybrid FV/DV Approaches

The figure above illustrates a general hierarchical model applied to a large IP core – block, cluster, and full IP.  The goal is to achieve comprehensive FV coverage for each block-level design, and extend FV into the cluster-level as much as possible.

The expectation is that block-level interface assertions and assumptions will be easier to develop and verify.  And similarly, end-to-end temporal assertions involving multiple interfaces across the block will have smaller sequential depth, and thus have a higher likelihood of achieving an exhaustive proof.  Scott noted that the micro-architects work to partition block-level models to assist with end-to-end assertion property generation.

Architectural Modeling and FV

Prior to focusing on FV of RTL models, a unique facet of the recommended hybrid methodology is to create architectural models for each block, as depicted below.

The architectural models can be quite abstract, and thus small and easy to develop.  The models only need to represent enough behavior to include key architectural features.

A major goal is to enable the verification team to develop and exercise the more complex interface and end-to-end FV assertions, and defer work on the properties self-contained within the block functionality.  These architectural models are then connected to represent a broader scope of the overall hierarchy.

Although the resources invested in writing architectural models may delay RTL development, Scott highlighted that this FV approach expedites evaluation of complex, hard-to-find errors, such as:  addressing modes and address calculation; absence of deadlock in communication dependencies; verification of the order of commands; and measurement of the latency of system operations.

Formal Verification of RTL

The development of additional FV assertions, assumptions, and covers expands in parallel with RTL model coding, with both the micro-architects and verification team contributing to the FV testbench.

The recommended project approach is for the micro-architects to focus initially on RTL interface functionality; these RTL “stubs” are immediately useful in FV (except for the end-to-end assertions).  Subsequently, the baseline RTL is developed and exercised against a richer set of assertions and covers.  And, the properties evaluated during the architectural modeling phase are re-used and exercised against the RTL.

As illustrated below, there is a specific “FV signoff” milestone for the block RTL.

The FV results are evaluated – no failing assertions are allowed, of course, and any assertions that are incomplete (bounded proven) are reviewed.  Any incompletes are given specific focus by the dynamic verification testbench team – the results of a bounded search are commonly used as a starting point for deep state space simulation.

Promotion of FV Components to Higher Levels

With a foundation in place for block-level FV signoff, Scott described how the FV testbench components are promoted to the next level of verification hierarchy, as illustrated below.  The FV component is comprised of specific SystemVerilog modules, with assertions and assumptions partitioned accordingly.  (An “FBM” is a formal bus model, focused on interface behavior properties.)

Each FV component has a mode parameter, which sets the configuration of assertions, assumptions, and covers.  In ACTIVE mode, all assertions and covers are enabled; assumptions (in yellow) are used to constrain the property proof.  When promoting an FV component, the PASSIVE mode is used – note that all assume properties are now regarded as assertions to be proven (green).

In short, any block-model assumption on input ports needs to be converted to an assertion to verify (either formally or through dynamic simulation) at the next level of model integration.

Briefly, there is also an ARCH mode setting for each FV component, as depicted below:

If a high-level architectural model is integrated into a larger verification testbench, end-to-end and output assertions become FV assumptions (in yellow), while input assumptions are disabled (in grey).
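A minimal sketch of how such a mode parameter might be wired up (our hypothetical module, signal, and property names – not Intel’s code) shows one input-port property bound three different ways:

package fv_modes_pkg;
  typedef enum { ACTIVE, PASSIVE, ARCH } fv_mode_e;
endpackage

module blk_fv_component
  import fv_modes_pkg::*;
#(
  parameter fv_mode_e MODE = ACTIVE
) (
  input logic clk, rst_n,
  input logic in_valid, in_ready
);
  // Input-port stability property: valid must hold until accepted
  property p_in_stable;
    @(posedge clk) disable iff (!rst_n)
    in_valid && !in_ready |=> in_valid;
  endproperty

  generate
    if (MODE == ACTIVE) begin : g_active
      // Block under proof: constrain the environment
      m_in_stable: assume property (p_in_stable);
    end else if (MODE == PASSIVE) begin : g_passive
      // Promoted into the parent: the former assumption must now be proven
      a_in_stable: assert property (p_in_stable);
    end
    // ARCH mode: input assumptions are disabled, so no property is instantiated
  endgenerate
endmodule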

Additional Formal Apps

Scott highlighted the additional tools that are still needed to complement the formal property verification methodology:

  • inter-module connectivity checks (e.g., unused ports)
  • clock domain crossing (CDC) checks for metastability in interface design
  • sequential equivalence checking, for implementation optimizations that introduce clock gating logic
  • X-propagation checks

3rd Party IP Interface Verification

If the design integrates IP from an external source, a FV-focused testbench is a valuable resource investment to verify the interface behavior.  Scott mentioned that end-to-end assertions that are developed to ensure the overall architectural spec correctly matches the 3rd party IP behavior have proven to be of considerable value.  (Scott also noted that there are commercially-available FV testbenches for protocol compliance for industry-standard interfaces.)

Dynamic Verification

Although FV verification can accelerate bug finding, Scott reiterated that their verification team is focused on a hybrid methodology, employing DV for deep state bug hunting, and for functionality that is not well-suited for formal verification.  Examples of this functionality include:

  • workload-based measurements of system throughput and interface bandwidth
  • power state transition management
  • firmware and hardware co-simulation (or emulation)
  • SerDes PHY training

Results

The planning, development, and application of FV components and integration in a verification testbench requires an investment in architecture, design and verification team resources.  Scott presented the following bug rate detection graph as an indication of the benefits of the approach they have adopted.

The baseline is a comparable (~20M gate) IP design, where dynamic verification did not begin in earnest until the initial RTL integration milestone.  The early emphasis on FV at the architectural level captured hundreds of bugs, quite a few related to incorrect and/or unclear definitions in the architectural spec (including 3rd party IP integration).  These bugs would have taken much longer and more verification resources to uncover in a traditional DV-only verification flow.

Summary

Formal property verification has become an integral part of the initiative toward a shift-left methodology.  Early FV planning and assertion development, combined with a resource investment in architectural modeling, helps identify bugs much earlier.  A strategy of focusing on FV at lower levels of the design hierarchy, followed by evaluation of constraints at higher levels of integration, offers FV testbench reuse benefits.  This is an improvement over lower-level dynamic verification testbench development, which is commonly discarded as the design complexity progresses.

The best practices FV methodology recently presented by Intel at DVCON is definitely worth further investigation.

References

[1]   IEEE Standard 1800 for SystemVerilog, https://ieeexplore.ieee.org/document/8299595

[2]   Chen, Hao; Peverelle, Scott; et al., “Maximizing Formal ROI through Accelerated IP Verification Sign-off”, DVCON 2022, paper 1032.

Also read:

Intel Evolution of Transistor Innovation

Intel 2022 Investor Meeting

The Intel Foundry Ecosystem Explained