
Package Pin-less PLLs Benefit Overall Chip PPA

by Tom Simon on 08-19-2021 at 6:00 am

Pin-less PLLs from Analog Bits

SOCs designed on advanced FinFET nodes like 7, 5 and 3nm call for silicon-validated physical analog IP for many critical functions. Analog blocks have always been node and process specific, and their development has always been a challenge for SOC teams. Fortunately, there are well-established and endorsed analog IP companies like Analog Bits that provide high-performance analog IP that is ready to use for just about every process. I had a conversation recently with Mahesh Tirupattur, executive VP of sales and marketing at Analog Bits, where we discussed their PLL portfolio. Clocking requirements have grown substantially, and this has led to diversity in their PLL product line.

Mahesh touched upon new PLLs needed for PCIe. Their PCIe Gen3 PHY is based on a ring oscillator, while their Gen4/5 PLL uses an LC tank. They have also added high-performance PLLs for chip-to-chip interfaces that operate at 20GHz on advanced FinFET nodes. As further illustration of the diversity now required in PLLs, he pointed to ultra-low-power PLLs for IoT and radiation-hardened chips for military and space applications. Not only has the number of PLL types for specific applications grown, but the sheer number of PLLs needed to provide clocking across and throughout an SOC has also increased.

Historically, there have been technical limitations on where PLLs could be located due to their power supply requirements. This, in turn, leads to larger clock distribution networks that add complexity due to a host of factors. Longer clock lines require substantial area for the buffers and inverters necessary to maintain clock signal integrity. They also create noise issues. Switching on larger clock networks consumes a significant percentage of SOC power and can also lead to aging-related failures. However, placing PLLs closer to where they are needed has traditionally introduced the need for additional supply lines and external power pins.


Mahesh talked about Analog Bits’ patented Package Pin-less technology for advanced FinFET nodes, which gives designers the freedom to locate PLLs and analog sensors where they are needed without concern for adding non-core voltage supply lines. These PLLs need only the core supply voltage, so they are free from pad and power-bump restrictions. The result is lower power, fewer aging effects, lower pin count, less crosstalk and reduced area.

While analog IP has often rightly been considered enabling technology because it provides needed functions, the benefits from Analog Bits’ Package Pin-less offering go far beyond that. It creates ripple effects that improve top line development goals and criteria. It’s impressive how seemingly mundane building blocks can actually make big differences.

Analog Bits has a rigorous program to design and qualify test silicon. They work with a broad range of foundries and have physical IP for many of the most advanced FinFET processes. They offer an attractive no-royalty business model. Mahesh told me that since they were founded in 1996 they have made over one thousand deliveries, which have been used to fab literally billions of units. Their website offers much more information on their full line of analog IP, which includes on-chip sensors, PLLs, SerDes and more.

Also Read:

Analog Sensing Now Essential for Boosting SOC Performance

Analog Bits is Taking the Virtual Holiday Party up a Notch or Two

Analog Bits is Supplying Analog Foundation IP on the Industry’s Most Advanced FinFET Processes


Semiconductor Growth to Continue in 2022

by Bill Jewell on 08-18-2021 at 2:00 pm


The semiconductor market showed powerful growth in 2Q 2021, up 8.3% from 1Q 2021 and up 29% from a year earlier, according to WSTS. Most major semiconductor companies experienced substantial revenue growth in the quarter. The memory companies were especially strong, with 2Q 2021 versus 1Q 2021 revenue (in local currency) up 19.6% for Samsung, 21.5% for SK Hynix, 19.0% for Micron Technology and 11.8% for Kioxia. Samsung passed Intel in 2Q 2021 to regain the top semiconductor supplier ranking. In U.S. dollars the memory companies performed even better, up 22% collectively. The non-memory companies had mixed results. Three had double-digit quarter-to-quarter growth: MediaTek at 16.3%, AMD at 11.8% and Nvidia at 11.3%. However, Intel declined 0.2% and STMicroelectronics declined 0.8%. Collectively the non-memory companies grew 3% from 1Q 2021 to 2Q 2021.

The outlook for 3Q 2021 is generally solid. Of the nine companies providing guidance for the quarter, three expect double-digit growth versus 2Q 2021 (Qualcomm, Micron Technology and NXP Semiconductors) and three are around 7% (AMD, Infineon, and STMicroelectronics). MediaTek is projecting 2.5% growth, Texas Instruments is guiding for no growth, and Intel is calling for a 2.7% decline. Both Texas Instruments and Intel cited capacity limitations.

The powerful momentum in the first half of 2021 (up 24% from the first half of 2020) will carry the semiconductor market to substantial growth for the full year 2021. WSTS just updated their Spring 2021 forecast with final 2Q 2021 data and is now projecting 2021 will be up 25%. In June, IC Insights projected 2021 growth of 24% and Gartner called for 22%. Our latest forecast from Semiconductor Intelligence is 26%.

How much of the momentum of the semiconductor market in 2021 will carry into 2022? The answer is dependent on two factors:

1. When will the current shortages in many segments of the market be resolved so the market is more in alignment with the growth in end equipment?

2. What will be the end equipment growth rates in 2022 compared to 2021?

According to Susquehanna Financial Group, the average lead time for semiconductors is now over 20 weeks, up from 14 weeks at the beginning of the year. Intel CEO Pat Gelsinger expects shortages to begin easing by the end of the year but said it could take one or two years before supply and demand are fully in balance. TSMC CEO CC Wei stated they have increased production of automotive microcontrollers and he expects shortages will be reduced in 3Q 2021.

Once the semiconductor supply catches up with demand, growth in 2022 will be dependent on the economy and the demand for end equipment. The International Monetary Fund (IMF) in July forecast global GDP will increase 6.0% in 2021 as the world recovers from the pandemic. The IMF expects the recovery momentum to carry into 2022 with 4.9% growth, up from the May forecast of 4.4%. Canalys expects the smartphone market to bounce back to a 12% gain in 2021 after a 7% decline in 2020, a drop primarily due to pandemic-related manufacturing disruptions. Canalys shows 2022 smartphone growth of 5%, higher than the growth rates in the four years prior to the pandemic. IDC has not updated its PC forecast since May, when it called for an 18% increase in 2021 and a 5% decline in 2022. In 2Q 2021 PC shipments showed a slowing of growth, but the year 2021 should still show a double-digit gain in PC units. IHS Markit projects light vehicle production will be 83 million units in 2021, up 11% from 2020. Vehicle production in 2021 has been limited by semiconductor shortages. 2022 light vehicle production should be up a strong 9% as the industry catches up to pre-pandemic levels.

As shown in the semiconductor market forecast chart, growth will moderate in 2022 after a gain of over 20% in 2021. IC Insights projects the growth rate of the IC market will average 13% for 2022 and 2023. WSTS’ update of its Spring 2021 forecast shows a 10% increase in 2022. Our forecast at Semiconductor Intelligence is for a 15% gain in 2022. Semiconductor market growth of over 10% in 2022 would be healthy compared to the long-term growth rate of mid-single digits. Thus, most of the market momentum in 2021 should carry into 2022.


TSMC Wafer Wars! Intel versus Apple!

by Daniel Nenni on 08-18-2021 at 10:00 am


The big fake news last week came from a report out of China stating that TSMC won a big Intel order for 3nm wafers. We have been talking about this for some time on SemiWiki, so this is nothing new. Unfortunately, the article mentioned wafer and delivery date estimates that are unconfirmed and, from what I know, completely out of line. From there the media created a frenzy pitting Intel against Apple and AMD in a war of wafers, in a desperate attempt to get cheap clicks:

Intel locks down all remaining TSMC 3nm production capacity, boxing out AMD and Apple by John Loeffler Tech Radar

Intel Grabs Majority of TSMC’s 3nm Capacity by Hassan Mujtaba WCCftech

Intel Has Reportedly Cornered TSMC 3nm Chip Capacity by Paul Lilly HotHardWare

Apple secures majority of TSMC’s 3nm production capacity over Intel by Sean Gizmo China

And now we have wannabe influencers on Seeking Alpha and LinkedIn repeating this false narrative ad nauseam.

First let’s look at the TSMC/Apple backstory. Apple came to TSMC from Samsung at 20nm for the iPhone 6, which was the best phone of its time in my opinion. Apple first partnered with Samsung when founding the iPhone franchise but switched to TSMC after Samsung came out with their own line of smartphones that competed with their #1 foundry customer. A giant IP theft lawsuit followed, which cemented Apple’s relationship with TSMC because, as we all know, “TSMC is the trusted foundry and does not compete with customers”. As the story goes, Apple first approached Intel to make their SoCs but was rebuffed, a decision that Intel greatly regrets.

The TSMC/Apple relationship disrupted the semiconductor manufacturing business by introducing what I call process technology half steps. Instead of following Moore’s Law with a new process every two to three years, TSMC released a new process version every year, timed with the Apple iPhone fall launch. To do that, TSMC and Apple closely collaborate on a process technology optimized for the Apple SoCs, which is frozen at the end of each year for high-volume production in the second half of the following year.

The first half step was 20nm to 16nm. TSMC 20nm first introduced double patterning which was no small feat for chip designers. Next TSMC added FinFETs (another design challenge) to 20nm creating 16nm. TSMC uses the same fabs for the half steps which saves time and resources and promotes advanced yield learning for smoother process ramping. TSMC 16nm was further optimized for a 12nm version.

TSMC 10nm (N10) was the next process node which was followed by the N7 half step. Partial EUV was added to N7 (N7+) as another half step. N7+ was further optimized for N6.

TSMC N5 followed with more EUV and was further optimized for N4, which is what will be in the iPhone products launching next month (Apple’s version of N4).

TSMC N3 was officially launched at the TSMC Technology Symposium 2021 with even more EUV, and it will be in volume production starting in 2H 2022 (Apple). As compared to N5, N3 will provide:

  • +10-15% performance (iso-power)
  • -25-30% power (iso-performance)
  • +70% logic density
  • +20% SRAM density
  • +10% analog density
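Those density gains don’t translate one-for-one into die area, since logic, SRAM and analog scale differently. The sketch below estimates the relative area of a hypothetical die ported from N5 to N3; the die composition (60% logic, 20% SRAM, 10% analog, 10% unscaled I/O and other) is an illustrative assumption of mine, not a TSMC figure:

```python
# Back-of-envelope area scaling for a hypothetical N5 -> N3 port.
# Density gains are from the bullet list above; the die mix is assumed.
density_gain = {"logic": 0.70, "sram": 0.20, "analog": 0.10}

# Hypothetical N5 die composition (fractions of total area).
mix = {"logic": 0.60, "sram": 0.20, "analog": 0.10, "other": 0.10}

scaled_area = 0.0
for region, frac in mix.items():
    gain = density_gain.get(region, 0.0)  # "other" (I/O, etc.) assumed not to scale
    scaled_area += frac / (1.0 + gain)

print(f"N3 die area relative to N5: {scaled_area:.2f}")
```

With this mix the die lands at roughly 71% of its N5 area, and the poorly scaling SRAM and analog portions become a larger share of the total, a trend SOC teams increasingly have to plan around.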

In the 10 years that TSMC and Apple have been working together, an iPhone launch has never been missed and Apple has always been first to the new process technology. This collaborative half-step process methodology is the reason TSMC and Apple have executed flawlessly. As a result, Apple is TSMC’s #1 customer and closest partner, and I do not see that changing anytime soon, if ever.

I first heard word of Intel having the TSMC N3 PDK in the first part of 2020, which was a bit of a surprise. Intel is a longtime TSMC customer due to acquisitions, but not for native Intel products. I confirmed it with multiple sources inside the ecosystem and started writing about it shortly thereafter.

What I was told later is that Bob Swan signed the N3 deal with TSMC due to the delays in Intel 10nm and 7nm to motivate Intel manufacturing to get those processes out as planned. TSMC then increased CAPEX to build the additional N3 capacity required to satisfy the Intel wafer agreement.

To be clear, wafer agreements are signed 2-3 years before the chip makes it into HVM, and TSMC can build fabs faster than that, so there will be no N3 shortages for anyone who signed a wafer agreement (Apple, AMD, NVIDIA, QCOM, etc.). If they need more chips than what they signed up for, which happens, there may be shortages. This is how TSMC and the foundry business work. It’s all about the wafer agreements.

As an interesting side note, Pat Gelsinger and his new IDM 2.0 push have made the Intel/TSMC relationship all the more interesting. Pat insists that Intel will manufacture the majority of their products internally. I understand that 50.001% is technically a majority, but that still seems low given the Intel TSMC N3 wafer agreement, the process delays Intel is currently experiencing, and the competitive pressure from AMD.

We covered the Intel Accelerated event last month and will be covering the Intel Architecture Day as well. Hopefully Intel’s new process and product initiatives are successful because competition is what keeps semiconductor technology moving forward and cost effective.


Controlling the Automotive Network – CAN and TSN Update

by Bernard Murphy on 08-18-2021 at 6:00 am


Cars are hotbeds of systems innovation. I’ve been fortunate to be asked to write about many of these areas, from the MEMS underlying sensors to ISPs and radars, intelligent imaging and sensor fusion, and many aspects of design for safety within the SoCs around a car. But I haven’t written much about the networks connecting these devices. One piece back in 2017 was very informative, for me at least. After talking to Nikos Zervas, CEO at CAST, plus some further research, I have an update.

A Babel of technologies

Automakers are eager to advance but they’re also conservative. Protocols I wrote about in that earlier blog are still very much around, judging by product offerings from companies like NXP and Renesas. LIN is a low-cost, single-wire technology to control windows, door locks, mirrors and power seats. Then there’s CAN, primarily for powertrain functions, which is now advancing to CAN-FD, supporting 5-8 Mbps, and CAN-XL, at up to 10 Mbps. All are fully interoperable.
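To see why CAN-FD’s faster data phase matters, here is a rough frame-time estimate. CAN-FD keeps the arbitration and acknowledgment fields at the classic nominal bit rate and only accelerates the data phase. The bit counts and the 500 kbit/s / 5 Mbit/s rates below are simplified, illustrative assumptions (stuff bits and some delimiters are ignored), not figures from the article:

```python
# Rough CAN-FD frame-time estimate: arbitration/control fields run at the
# nominal bit rate, while the data phase runs at the faster data bit rate.
# Bit counts are simplified approximations; rates are example values.
NOMINAL_RATE = 500e3   # bit/s, arbitration phase
DATA_RATE = 5e6        # bit/s, data phase (CAN-FD)

def frame_time_us(payload_bytes):
    slow_bits = 30 + 12                  # arbitration/control + ACK/EOF (approx.)
    fast_bits = payload_bytes * 8 + 28   # data field + CRC field (approx.)
    return (slow_bits / NOMINAL_RATE + fast_bits / DATA_RATE) * 1e6

print(f"64-byte CAN-FD frame: ~{frame_time_us(64):.0f} us")
print(f"Same frame, all at nominal rate: ~{(42 + 64*8 + 28) / NOMINAL_RATE * 1e6:.0f} us")
```

Even with the data phase sped up 10x, the whole frame is only about 6x faster here, because the arbitration phase still runs at the nominal rate; that is the bottleneck CAN-XL pushes against further.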

FlexRay is still very much around, touted as the deterministic technology required for safety guarantees in critical systems like airbags and ABS. Some suggest FlexRay as an eventual replacement for LIN and CAN, though at present cost is a barrier. Automakers seem content to stick with all three technologies for now.

I can’t find mention of MOST in a quick check of recent product literature so I’m assuming all the advances in multimedia and other high bandwidth traffic are moving to automotive Ethernet.

What is automotive Ethernet?

Ethernet is such a well-established technology that it might seem strange that it didn’t migrate to our cars long ago. It did – to on-board diagnostics and audio/video support – but not to safety and information-related functions. The reason is lack of deterministic latency. We may tolerate buffering on our phones and TVs, but there’s no room for delay in detecting a potential collision and not much more for a delayed direction update from the nav. To solve that problem, the IEEE 802.1 group formed a Time Sensitive Networking (TSN) group back in 2012. This organization has spun a collection of sub-standards and profiles to address different needs, with a mandate to support scalability. TSN now includes a synchronization standard, multiple traffic shaping options, frame preemption for express traffic, frame replication to support redundancy and filtering/policing options. You can watch an informative video HERE.

You may know that in network architecture automakers are moving towards a domain-based concept. Instead of proliferating MCUs around different functions, these will be grouped into domains – Body, Chassis and Powertrain – to reduce costs and power. Over time, the expectation is that grouping will move more towards physical zones in the car, no doubt for further performance, cost and power management. That may drive further consolidation in protocols, with Ethernet/TSN a strong candidate.

Who builds these interface solutions?

Today at least, US and European OEMs and Tier1s are driving TSN, with Korea and China following. I know the German automakers are involved, as is Bosch and certainly European and Japanese semiconductor makers. Where do they get IP if they want integrated solutions? CAN, developed by Bosch, now stands at CAN 2.0. CAN-FD is a later extension of that standard, as is CAN-XL. Bosch unsurprisingly supplies IP for these standards. So does CAST, with now more than 150 customers for their CAN cores.

That’s an interesting story. CAST develop a lot of their own IP but they also work with a few partners, one of which is Fraunhofer IPMS in Dresden. Fraunhofer is a highly respected application-oriented research lab with nearly 27,000 employees. IPMS is involved in all the standards committees, well plugged into the explicit and implicit requirements that go with these things. IPMS develop the cores and CAST productize for the IP market.

Which means also working with a verification partner – Avery Design Systems – to jointly develop and cross-check IP against VIP throughout development. Nikos says these solutions are sufficiently polished now that, unlike many IP/VIP mixes, an OEM can bring both up in their testbench in a day with that proven VIP solution, also available directly through CAST.

So far, that’s it for CAN-* IP providers – Bosch and CAST. Good for CAST! And when TSN started attracting interest, they worked with IPMS to develop that also, now in production for 4 years. They were one of the first providers to come out with a solution. Naturally all these cores offer functional safety options and the CAN cores are ASIL-B ready; the TSN IP will achieve that approval soon. You should check out both CAST and Avery Design Systems for their automotive connectivity solutions.

Also Read:

Webinar: Learn about NVMe conformance Testing

Avery Levels Up, Starting with CXL

Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express


Flex Logix and Socionext are Revolutionizing 5G Platform Design

by Mike Gianfagna on 08-17-2021 at 10:00 am


The world is buzzing with 5G deployment news. It seems the entire planet anxiously awaits the step-function improvement in bandwidth and latency promised by this new technology. When there is additional deployment, it’s news. When there are new chipsets and devices supporting the standard, it’s news. But when there is a fundamental shift in the way 5G platforms are designed, that’s big news, and that’s the topic of this post. Read on to see how Flex Logix and Socionext are revolutionizing 5G platform design.

All advanced communication protocols, 5G included, are constantly evolving. This process can be vexing for chip suppliers who build a part targeting a standard that becomes obsolete after tape-out. Software updates can help, but the demanding throughput and latency requirements of standards like 5G are often incompatible with software-only fixes. Dedicated hardware is typically required, and the embedded programmable devices offered by Flex Logix are often exactly what’s needed.

Socionext recognized the opportunity here and decided to license Flex Logix’s EFLX® 4K eFPGA for a 7nm ASIC they are developing for a major communication company’s 5G platform. 5G and 7nm are certainly newsworthy but the programmability of the part is the real news. By leveraging the efficient programmability of the Flex Logix technology, Socionext can boost performance and reduce power. This is accomplished in part by eliminating a chip in the base station with the new design.

There’s a “secret sauce” backstory here as well. Carriers typically need to share proprietary software with their ASIC vendor to have it added to an FPGA. The new architecture allows the carrier to personalize their platform directly. Yutaka Hayashi, Vice President of Socionext’s Data Center and Networking Business Unit summed it up like this:

“While wireless base stations have always used FPGAs to provide carrier personalization and upgradability, the demands of 5G require higher performance while reducing system power and cost. This can be achieved by using an ASIC solution and by leveraging Flex Logix’s eFPGA in that design. Now that the ASIC becomes reconfigurable, it enables our wireless customers to deliver a flexible 5G platform that can support carrier specific requirements today and in the future.”

You can read the full press release here.  To get some more backstory on the announcement, I spoke with Ralph Grundler, senior director of marketing and architecture solutions at Flex Logix. The first thing I explored was the uptake of Flex Logix technology by ASIC vendors. Ralph explained that Flex Logix is already working with several ASIC vendors to integrate their technology into customer designs. The current announcement with Socionext is the first 5G application targeted at 7nm, however.  There is an interesting statement in the press release:

The EFLX4K DSP IP core replaces about ¼ of the LUTs with 40 multiplier-accumulators for DSP and artificial intelligence (AI) applications. The two EFLX4K cores can be tiled together to make larger arrays to support applications needing more LUTs as required, well over 250,000 LUTs with any mix of Logic and DSP cores.

The scalability implications of this statement are significant. Ralph explained that the technology today easily scales to a single 8 x 8 configuration of EFLX4K LUT or DSP cores. Larger arrays or LUT requirements can be supported through modular design approaches, much like the way chip designers use IP or functional subsystems to build an ASIC more efficiently. With the wide variety of eFPGA sizes available, on-chip uses or configurations of eFPGA can support large accelerators accessed via a system bus or within a functional subsystem. This approach clearly offers a lot of performance headroom across a broad spectrum of use cases.
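To make the tiling arithmetic concrete, here is a quick capacity sketch using the figures quoted above: 4K LUTs per EFLX4K logic core, with the DSP core trading roughly a quarter of its LUTs for 40 multiplier-accumulators. The 8 x 8 array and the half-and-half mix below are hypothetical examples, not a real Socionext configuration:

```python
# Capacity estimate for a tiled EFLX4K array, per the press-release figures.
# The specific array mixes below are illustrative assumptions.
LUTS_PER_LOGIC_CORE = 4096
LUTS_PER_DSP_CORE = 3072   # ~3/4 of 4K, the rest traded for MACs
MACS_PER_DSP_CORE = 40

def array_capacity(logic_cores, dsp_cores):
    luts = logic_cores * LUTS_PER_LOGIC_CORE + dsp_cores * LUTS_PER_DSP_CORE
    macs = dsp_cores * MACS_PER_DSP_CORE
    return luts, macs

# An 8 x 8 array of all logic cores:
luts, macs = array_capacity(64, 0)
print(f"8x8 all-logic array: {luts} LUTs")

# The same footprint with half the cores swapped for DSP cores:
luts, macs = array_capacity(32, 32)
print(f"8x8 mixed array: {luts} LUTs, {macs} MACs")
```

The all-logic 8 x 8 array works out to 262,144 LUTs, consistent with the “well over 250,000 LUTs” claim in the press release.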

My discussion with Ralph concluded with an important observation. Support for embedded programmability was a hard requirement from Socionext’s end customer. It seems that system designers have figured out the importance of integrating reconfigurability for power and cost savings. The future will be interesting to watch as more silicon becomes adaptable to changing needs. You can learn more about customer adoption of Flex Logix technology here.  The trends seem clear. Flex Logix and Socionext are revolutionizing 5G platform design.


How Hyperscalers Are Changing the Ethernet Landscape

by Synopsys on 08-17-2021 at 6:00 am


It’s all about bandwidth these days – fueling hyperscale data centers that support high-performance and cloud computing applications. It’s what enables you to stream a movie on your smart TV while your roommate plays an online game with friends located in different parts of the country. It’s what makes big data analytics run swiftly and allows artificial intelligence (AI) algorithms to perform their magic and provide valuable insights for everyday gadgets and beyond.

As the data connectivity backbone for the internet, the Ethernet protocol is answering the call for increased bandwidth demands by supporting speeds of 200G, 400G and, now, 800G. Before long, 1.6T will not be out of the question. Going hand in hand with higher bandwidth is the need for efficient data connectivity over longer distances.

While networking equipment companies have historically influenced Ethernet speeds, hyperscalers are now disrupting the market and driving up speeds, while also influencing the Ethernet roadmap. In this post, which was originally published on the “From Silicon to Software” blog, we’ll take a closer look at the bandwidth needs of hyperscale data centers and how Ethernet IP supports scalable, high-data-rate connectivity requirements for data-fueled applications.

Exponential Data Growth

Even before the COVID-19 pandemic hit and pushed many of our daily activities online, bandwidth was growing at rates faster than previously forecast. The graph below shows the demand over time driven by a variety of data-intensive applications. With the expanding ubiquity of the internet of things (IoT), cloud storage, virtual and augmented reality, video streaming, and online collaboration applications, it’s no wonder that this volume of data is expected to grow exponentially.

Source: IEEE 802.3 Industry Connections Bandwidth Assessment, Part II

Last April, the Ethernet Technology Consortium (previously known as the 25 Gigabit Ethernet Consortium) announced the 800GBASE-R specification for 800G Ethernet. This specification introduces a new media access control (MAC) and physical coding sublayer (PCS), repurposing two sets of existing 400G Ethernet logic from the IEEE 802.3bs standard with some modifications to distribute data across eight 106-Gbps physical lanes. The Consortium comprises networking and data center industry leaders, including Synopsys, who want to move more aggressively than the IEEE in driving the standards for faster Ethernet networks. IEEE, meanwhile, formed a group last fall to consider the next transmission rate for Ethernet (800G as well as 1.6T are both in the conversation).
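The lane math behind that specification can be sanity-checked with simple arithmetic. The sketch below assumes 106.25 Gbps per-lane signaling and RS(544,514) FEC, as used in 400GBASE-R; the post-FEC figure is a back-of-envelope approximation that ignores additional framing overhead, not the exact spec value:

```python
# Back-of-envelope check on the 800GBASE-R lane math: eight electrical
# lanes at ~106 Gbps each. The FEC overhead factor assumes RS(544,514),
# as in 400GBASE-R; treat the payload figure as an approximation.
LANES = 8
LANE_RATE_GBPS = 106.25          # per-lane signaling rate, incl. FEC overhead
FEC_EFFICIENCY = 514 / 544       # RS(544,514) code rate

raw = LANES * LANE_RATE_GBPS
payload = raw * FEC_EFFICIENCY

print(f"Aggregate raw rate: {raw:.1f} Gbps")
print(f"Post-FEC payload:  ~{payload:.0f} Gbps")
```

The post-FEC figure comes out around 803 Gbps, leaving room for the 800G MAC rate plus a small amount of framing overhead, which is why eight ~106 Gbps lanes are needed rather than eight 100 Gbps lanes.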

Scaling Up for a Data-Driven World

Hyperscale data centers—so named because they’re designed to scale up quickly and in a big way to meet increasing demands for compute, memory, storage, and networking resources—consist of thousands of servers managing petabytes (or more) of information. These massive workhorses are central to our online lives, so it’s not a surprise hyperscalers are dominating the conversation around bandwidth.

One of the ways in which hyperscale data centers scale is with the addition of racks that are networked through Ethernet. Some hyperscalers are now in the business of developing application-specific systems-on-chip to accelerate machine-learning applications, connecting multiple processors and AI accelerators for faster, more power-efficient high-performance computing.

Ethernet, however, has become the networking technology of choice because of its flexibility in supporting the same software even if the hardware in the system is replaced. The standard also boasts speed negotiation and backwards compatibility with the software stack – a device that’s 20 years old with an Ethernet port and a software driver can plug into a newer Ethernet network and still work. It can also utilize different kinds and classes of media: optical fiber, copper cables, PCB backplane, etc.

What Makes Ethernet-Based Designs a Challenge?

One of the challenges of designing with the multi-layer Ethernet protocol involves ensuring that the MAC, PCS, and the physical medium attachment (PMA) sublayers deliver the optimal performance and latency once integrated. If the pieces are from different vendors, establishing interoperability can be a tough task, particularly since 800G Ethernet has not yet been standardized by IEEE.

Electrical difficulties are another issue, as is the need for digital logic to handle the throughput (i.e., hardware with parallel processing, very fast clocks, etc.). Additionally, moving from 400G up to 800G entails using a 100G electrical PHY, whose performance across channel reaches presents a challenge.

PHY aside, 800G Ethernet can be regarded similarly to 400G, but with faster inputs. At higher data rates, it becomes even more important to engineer the datapath to be as efficient as possible to drive the lowest latency. Forward error correction (FEC) will add latency to the network, and a longer haul will need more powerful FEC.

How Ethernet IP Facilitates High-Performance Compute

IP is an important part of the equation for Ethernet connectivity that ensures first-pass silicon success. A MAC+PCS+PHY integrated Ethernet IP can support increasing bandwidth and data rate requirements, low-latency needs, and interoperability expectations.

Addressing these challenges, Synopsys provides the industry’s only complete 200G/400G/800G Ethernet IP solution. The Synopsys DesignWare® Ethernet Controller IP portfolio for 200G/400G and 800G Ethernet complements our 112G/56G Ethernet PHY IP solutions in advanced FinFET processes, enabling Ethernet interconnects up to 800G. The resulting low-latency, high-performance Ethernet IP solution is ideal for networking, AI, and high-performance computing SoCs.

DesignWare Ethernet IP Solutions are IEEE-compliant and have undergone extensive third-party interoperability testing and certification, reducing integration risks, accelerating time-to-market, and enabling you to focus on product differentiation. The portfolio includes:

  • Configurable controllers and silicon-proven PHYs for speeds up to 400G/800G
  • Verification IP
  • Software Development Kits
  • Interface IP Subsystems

Once an Ethernet-based design has been developed, of course, it must be verified. This is where Synopsys VC Verification IP (VIP) for Ethernet 800G can help accelerate verification closure, providing a comprehensive set of protocol, methodology, verification, and productivity features. Implemented in native SystemVerilog and UVM, VC VIP runs natively on all simulators and can be integrated, configured, and customized with minimal effort. In addition, source code UNH-IOL test suites are available for key Ethernet features and clauses, allowing teams to quickly jumpstart their own custom testing and accelerate verification closure.

Summary

The amount of data being processed, shared, and consumed to fuel our Smart Everything world is rather mind-blowing. It is certainly changing the Ethernet landscape as hyperscalers drive up speeds to support the video conference calls, virtual reality games, and swift financial transactions that have come to define modern life. Ethernet IP and Verification IP solutions are proving to be up to the challenge in supporting the high performance and low latency required by an array of data-driven applications.

By Priyank Shukla and John Swanson, Staff Product Marketing Managers, Synopsys Solutions Group, and Anika Malhotra, Senior Product Marketing Manager, Synopsys Verification Group

Also Read:

On-the-Fly Code Checking Catches Bugs Earlier

Upcoming Virtual Event: Designing a Time Interleaved ADC for 5G V2X Automotive Applications

Optimize RTL and Software with Fast Power Verification Results for Billion-Gate Designs


You Get What You Measure – How to Design Impossible SoCs with Perforce

by Mike Gianfagna on 08-16-2021 at 10:00 am


We all know that a trusted, reliable, and well-integrated design flow is critical to successful advanced SoC design. So is proven, robust IP. While these elements are necessary for success, they are not, by themselves, sufficient. There are other aspects to consider – measurement, tracking and coordination. We’ve all heard the phrase: you get what you measure. This concept is especially true for complex chip design. Without solid management of assets and tracking of processes and results, success will, at best, be a lucky break that is not repeatable. The tools and infrastructure needed for this phase of chip design are the topic of this post. Read on to learn how to design impossible SoCs with Perforce.

DevOps Meets Chip Design

DevOps combines software development (Dev) and IT operations (Ops). The goal is to shorten the development lifecycle and facilitate continuous delivery with high quality results. The practice began in software development but has expanded to cover many industries and processes. Perforce is a company that focuses on delivering these kinds of solutions across many markets and products. According to its website, Perforce is a leading supplier of highly scalable development and DevOps solutions designed to deliver dynamic development, intelligent testing, risk management, and boundaryless collaboration. A broad and bold mission. Their tagline leaves an even stronger impression. It’s shown in the graphic at the top of this post.

When it comes to market penetration there are more bold statements, like Perforce solutions are used by 75% of Fortune 100 companies to innovate at scale. What is particularly interesting to the SemiWiki readership is this statement: 9 of the top 10 semiconductor companies use Perforce to get ahead of the competition with the best designs.

What’s In It for Me?

There have been many posts about Perforce and its acquisition of Methodics on SemiWiki – topics such as how to achieve scalability, how to plug holes in your IP portfolio, and how to securely collaborate in the cloud, for example. Let’s step back a bit and look at the big picture. What is the product portfolio offered by Perforce to optimize chip design? I’ll explore some of that next. There is also a very informative podcast on the topic coming soon. More on that in a moment.

Let’s start with what kind of DevOps tools are needed for IC design. Here is a list that should resonate:

  • IP and variant reuse and sharing
  • IP bill of materials (BoM) management
  • IP and design collaboration
  • End-to-end traceability
  • Design data management
  • A single source of truth across design and development

I’m sure you can think of a horror story (or two) when one of the above items went wrong in a complex design project. It turns out Perforce has quite a rich product portfolio that addresses all the above issues. The company works similar magic for many other industries, including:

  • Aerospace & defense
  • Automotive
  • Embedded systems
  • Energy
  • Financial
  • Gaming
  • Virtual production
  • Government
  • Medical devices
  • Software
  • Digital twins

That’s quite a list. With this many market challenges, the tools have become quite robust.  Regarding semiconductors, there are four key functional areas that form an integrated solution. They are:

IP Lifecycle Management: Track and manage IPs throughout their lifecycle.

Verification Traceability: Manage and provide end-to-end verification traceability.

IP Security Assurance: Find and track security threats in your SoC design.

Functional Safety: Ensure functional safety compliance with proven documentation.

You can even find a semiconductor starter pack on SemiWiki.

To Learn More

Beyond the solutions for mainstream semiconductor companies discussed so far, there is also a slightly different set of requirements and challenges faced by early-stage semiconductor companies. Dan Nenni and I recently caught up with Michael Munsey for a podcast. Michael is the senior vice president of marketing, business development and corporate strategy at Perforce, so he knows a thing or two about how to make chip design better.

We explored what specific challenges early-stage companies face with regard to IP management. What are the specific areas to focus on, what are the risks and what are the rewards?  It’s a compelling and informative conversation. The podcast will air on Friday, August 20. You can watch for it here. You’ll learn how to design impossible SoCs with Perforce.

Also Read

Achieving Scalability Means No More Silos

Your IP Portfolio is Probably Leaking. What Can You Do About It?

Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud


S2C FPGA Prototyping solutions help accelerate 3D visual AI chip

S2C FPGA Prototyping solutions help accelerate 3D visual AI chip
by Daniel Nenni on 08-16-2021 at 6:00 am

aivatech product 1

3D vision technology is rapidly evolving. Compared to 2D vision technology that deals with planar information, 3D vision works with physical information, including depth, which makes it possible to recognize and measure objects with curved surfaces and arcs. In addition, as deep machine learning and big data computing technologies develop, 3D vision AI chips are continually improving their ability to process visual data with higher accuracy, faster speed, and lower power consumption. This has caused an explosion in demand in both the industrial and consumer markets.

Aivatech is an industry-leading solution provider for chip design and visual algorithm systems. With a focus on the smart terminal market, it provides open, modular 3D vision AI software and hardware platforms with ultra-low power consumption. In May 2020, Aivatech released its first new-generation 3D visual AI chip, the Ai3100. In December 2021, the company launched an upgraded 3D visual AI chip, the Zhuiying Ai3101. This provided the smart terminal market with a full-stack solution, pushing forward the rapid deployment of AI applications such as smart door locks, robots, smart hardware, and new retail.

Zhuiying Ai3101 is a new generation of 3D-vision AI chip based on a heterogeneous architecture. With a built-in neural processing unit (NPU), 3D engine, HDR, ISP, and more, the chip is a leader in its field in terms of efficient, intelligent processing and analysis and low-power management. The Ai3101 has a very powerful and flexible 3D depth engine (i.e., a fusion accelerator for depth calculations): it allows depth to be measured from 0.2 m to 6 m with a 50 mm baseline.
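The quoted 0.2 m to 6 m range with a 50 mm baseline is consistent with the standard stereo triangulation relation Z = f·B/d. The sketch below is purely illustrative: the 800-pixel focal length is an assumed value chosen so the numbers line up with the stated range, and the Ai3101’s actual depth-engine parameters are not public.

```python
# Illustrative stereo triangulation: depth Z = f * B / d, where f is the
# focal length in pixels, B the camera baseline in meters, and d the
# disparity in pixels. The 800 px focal length is a hypothetical value;
# the Ai3101's real depth-engine internals are not published.
def depth_from_disparity(disparity_px, baseline_m=0.05, focal_px=800):
    """Return depth in meters for a given disparity in pixels."""
    return focal_px * baseline_m / disparity_px

# Larger disparities correspond to nearer objects:
near = depth_from_disparity(200)    # 0.2 m (near limit)
far = depth_from_disparity(6.67)    # ~6 m (far limit)
```

The takeaway is simply that with a 50 mm baseline, a disparity swing of roughly 7 to 200 pixels spans the advertised working range.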

S2C and Aivatech have had a long-term, stable relationship. Aivatech currently uses two FPGA validation platforms: PCIE VU440 and Single VU440 LS. “Using the PCIE VU440, our R&D team has carried out core IP prototyping and system verification, and based on the Single VU440 LS, they’ve accelerated SOC design verification and firmware driver development. The S2C validation platform provided us with a stable software and hardware collaborative work environment, especially in the early stage of chip development,” recalls Guo Wei, Co-founder/Vice President of Engineering at Aivatech.

“Both products are high-capacity, flexible, and scalable, while also equipped with multiple interfaces. This makes them well suited to our R&D needs and greatly shortens the R&D cycle. S2C’s professional engineering support team provided a tremendous amount of support and peace of mind when it came to developing our 3D vision AI chip project.”

IP validation with the S2C PCIE VU440 platform

Aivatech’s SOC validation with the Single VU440 LS platform

S2C’s Virtex UltraScale (VU) Prodigy Logic Systems are based on Xilinx’s Virtex UltraScale XCVU440 FPGA. These market-leading Prodigy Logic Systems ship in a low-profile enclosure that includes all components – FPGA module, extendable power control module, and power supply – for maximum flexibility, durability, and portability. Advantages of using S2C’s VU Prodigy Logic Systems include:

  • Modular, all-in-one design offering the highest flexibility and performance
  • Abundant prototyping tool support (partitioning and debugging) to speed up prototype bring-up
  • Easy reconfiguration or stacking to expand capacity across several projects
  • A comprehensive portfolio of Prototype Ready accessories for quickly building prototype targets

All versions of the VU Prodigy Logic Systems can be used in Standalone Mode or Multi-Board Mode. The Prodigy Logic Systems are supported by the industry’s broadest set of advanced FPGA solutions in S2C’s Prodigy Complete Prototyping Solutions, including:

  • Prodigy ProtoBridge™ for linking FPGA prototyping to system-level simulations
  • Prodigy Multi-Debug Module for multi-FPGA deep trace debug
  • Vast library of 90+ daughter cards to meet a variety of interface needs

With the mission of “Empowering smart terminals with 3D vision AI”, and based on its self-developed Zhuiying™ series of AI chips and supporting vision modules, Aivatech has already launched an industry-leading full-stack solution empowering door entry access control, robots, smart hardware, new retail, and many other AI applications.

Also Read:

Prototypical II PDF is now available!

StarFive Surpasses Development Goal with the Prodigy Rapid Prototyping System from S2C

CEO Interview: Toshio Nakama of S2C EDA


Your Car Should be Smarter

Your Car Should be Smarter
by Roger C. Lanctot on 08-15-2021 at 10:00 am

Your Car Should be Smarter

SEC. 24102. HIGHWAY SAFETY PROGRAMS.

Section 402 of title 23, United States Code, is amended — by striking ‘‘accidents’’ each place it appears and inserting ‘‘crashes’’; by striking ‘‘accident’’ each place it appears and inserting ‘‘crash’’ — from amendment to Senate infrastructure legislation

The Biden Administration and the U.S. Congress want cars to be smarter. My two-year-old BMW can detect when the distance between my vehicle and a stopped vehicle is closing too fast and will make an unignorable sound (with visual in-dash alerts) letting me know I need to apply more braking force immediately. If I change lanes without signaling, my car will gently resist and simulate the sensation of driving over small bumps in the road. And if I have been driving too long and I am starting to drift within or beyond my lane, my car will offer to find me a place to stop for a rest with a message on the center cluster display.

This kind of collaborative driving assistance – manifest in both passive and active safety systems (sometimes called advanced driver assist systems or ADAS) – is proliferating throughout the industry and across new vehicles from all car makers. While autonomous driving technology grabs the headlines, it is ADAS tech that is taking the wheel with the objective of saving lives.

Safety sells and safety is a high-demand consumer purchasing priority, but it is regulators in Europe and the U.S. that are driving adoption rates upward. Governments in both regions are focused on reducing highway fatalities and the related social costs.

Multiple provisions in the $1.2T infrastructure bill that passed the U.S. Senate this week call for intensified regulatory activity in the U.S. to increase the rate of adoption and deployment of these technologies in new cars. But these legislative efforts also highlight the tortured path to implementation that lies ahead.

Consumer Reports published a commentary highlighting the measures intended to impact the auto industry including:

“Infrastructure Bill Omits Key Safety Features in Cars” – Consumer Reports

Impaired-driving tech: The bill includes a requirement that over the next decade, all new cars be equipped with advanced technology that could prevent drunk and impaired driving by passively detecting when a driver is impaired.

Recall and safety information: The bill includes requirements for automakers to be more transparent about the performance of safety recalls, as well as improvements to make federal vehicle safety databases more accessible to the public. The bill particularly seeks a state-by-state survey of rideshare operators and whether vehicles used by those transportation network companies are subject to recalls.

Advanced lighting: The bill promotes better headlights in cars and trucks, including adaptive beams that adjust brightness based on traffic conditions to improve visibility.

Crash-test dummies: A new study will seek to identify ways to improve crash-test dummies so that they better represent women, the elderly, and other under-represented demographic sectors.

Keyless ignition shutoff: A new requirement will put automatic engine shutoff mechanisms in cars with keyless ignition switches to prevent carbon monoxide poisoning from a vehicle that is inadvertently left running.

Safer streets: Included in the bill is the Safe Streets and Roads for All grant program, which is designed to support data-driven local initiatives to prevent road deaths and injuries.

Better data: The pending infrastructure bill calls for focused research into the needs of vulnerable road users such as pedestrians and cyclists in an effort to increase their safety on public roads.

Also included in the bill are provisions for vehicles to stop or turn off automatically when the driver exits the vehicle before putting it in park in order to prevent vehicle rollaways. The bill further calls for widespread lane departure warning and lane keeping technology adoption after research and agency guidance and for driver distraction detection and mitigation systems.

The bill prioritizes and funds research into measures to protect vulnerable road users, including pedestrians and bicyclists. It also calls for a data-centric approach to defining a comprehensive safety action plan, along the lines of existing Vision Zero campaigns, identifying traffic crash and fatality hotspots and addressing the causes of crashes.

The bill calls for the creation of a Motorcycle Advisory Council; a review of school bus safety; a review of “move over or slow down for law enforcement” public messaging and laws; prioritization, identification, and segregation of crash data related to personal conveyance vehicles such as bicycles and scooters; minimum penalties for repeat impaired driving offenders; a review of seatback and hood and bumper safety standards; and expanding the scope of impaired driving beyond alcohol.

The Consumer Reports criticism of the legislation is that even in those instances where the bill calls for action there are few timelines or deadlines provided.  But CR misses the real limitation of legislative efforts to advance vehicle safety – enforcement.

Without the support, implementation, and enforcement authority of the U.S. Department of Transportation’s National Highway Traffic Safety Administration, none of the proposed measures will have any impact whatsoever. The most notable historical example is the Cameron Gulbransen Kids Transportation Safety Act of 2007.

The Act was approved in February 2008 and directed “the Secretary of Transportation to issue regulations to reduce the incidence of child injury and death occurring inside or outside of light motor vehicles, and for other purposes.” The Act was widely understood as requiring the adoption of a backup camera mandate.

In 2008, President George W. Bush gave federal transportation officials three years to draft wording on a backup-camera mandate.  The three years were necessary for the NHTSA to conduct research in order to drive the rulemaking process behind the backup camera requirement – including assessing different ways to solve the problem.

Five years later, the U.S. Department of Transportation still hadn’t drafted a rule. A group that included Gulbransen sued the agency to comply with Bush’s directive. A year after the lawsuit, the NHTSA issued a 2018 deadline for backup cameras on all new autos – 14 years after two-year-old Cameron Gulbransen was killed by a reversing vehicle driven by his father.

The same sorrow-fueled outrage that drove the adoption and implementation of the backup camera mandate lurks behind many of the measures in the pending infrastructure bill. The anti-rollaway provision alone reflects the grief and resulting lawsuits brought by the families of individuals run over by their own cars. Toyota has been the subject of at least one of these rollaway lawsuits resulting in part from the fact that data extracted from the subject vehicle revealed its awareness of the driver’s exit while the vehicle was not in park.

SOURCE: Vehicle Control History (VCH) data from a vehicle involved in a fatal vehicle rollaway.

The Biden Administration and its allies in Congress are saying – with their recent passage of infrastructure legislation – that we have a right to expect our cars to do more to protect and save our lives. Cars are capable of being even more intelligent than they already are. Let’s hope we don’t have to wait 14 years for new life saving measures to be adopted by the automotive industry. The infrastructure bill provides a key roadmap for that process.


Samtec Dominates DesignCon 2021

Samtec Dominates DesignCon 2021
by Mike Gianfagna on 08-15-2021 at 6:00 am

Samtec Dominates DesignCon 2021

DesignCon has grown over the years to become a true system design show. The show’s tagline is WHERE THE CHIP MEETS THE BOARD. This is just the beginning. Besides the chip and the board there are all the challenges, opportunities, and options to get signals reliably propagated throughout the entire system. Power, signal integrity, advanced materials and communication channels are just some of the topics to be explored. Samtec focuses on delivering the technology to implement high performance, low loss communication channels, so they fit the show well. There’s more to the story, however. In a past life, I did many trade shows with Samtec. I can tell you they are superb at building a strong presence at any show. Their partner ecosystem, compelling demos and in-depth tech talks all contribute to a high-profile presence. DesignCon is no different. My message is simple – Samtec dominates DesignCon 2021.

DesignCon is being held this week from August 16 – 18, 2021 at the San Jose McEnery Convention Center. That’s not a misprint; DesignCon will hold a live event this year. You can learn more about the show, including how to register here. As memorialized in the banner of this post, you can find Samtec at booth 907. Before I get into what they’ll be doing at the show, I want to mention that Samtec typically has fun giveaways. A word to the wise – get there early in case they run out. Let’s look at the substantial footprint Samtec will have at DesignCon.

Demos in Booth 907

Samtec comes to any trade show with a lot of compelling demonstrations, staffed by knowledgeable folks who can actually answer the hard questions. A nice part of the strategy is to promote their partner ecosystem. I personally was the recipient of this great hospitality in a prior company. Here’s a summary of their featured demos and who they will collaborate with:

  • 112G PAM4 with Alphawave: The innovative design of Samtec’s NovaRay® Arrays combines extreme density and extreme performance, which is critical as system sizes decrease and speeds increase. The fully shielded differential pair design contributes to the industry-leading 4.0 Tbps aggregate data rate.
[Product images: Samtec NovaRay, Bulls Eye, and AcceleRate]
  • V-band/60GHz, Flexible waveguide with Vubiq: This proof-of-concept V-band demonstration will run actual 10 Gbps Ethernet modulated traffic at 60 GHz through Samtec’s flexible waveguides, using Vubiq’s Haul Pass V10G product mainboard and Analog Devices HMC 6300 and 6301 chipsets.

Beyond the show floor, Samtec also dominates the technical sessions.

Technical Presentations

  • Tuesday, August 17 • 8:00 AM – 8:40 AM Specification-Based IBIS-AMI Model for PCIe 5.0 32 GT/s. Samtec will demonstrate how to convert electrical specification documents for PCIe 5.0 32 GT/s and generate an equivalent IBIS-AMI model that represents the significant electrical signaling behaviors.
  • Tuesday, August 17 • 11:10 AM – 11:50 AM A Case Study in the Development of a 112 Gbps-PAM4 Silicon & Connector Test Platform. The continued progression to higher data rates puts increasing demands on the design of practical Serdes channels. At 112G-PAM4, the UI is only 17.86 ps, and signal transmission in the PCB must be highly optimized for loss, reflections, crosstalk, and power integrity.
  • Tuesday, August 17 • 2:00 PM – 2:40 PM Hidden Secrets of IBIS Sampling Specifications. The I/O Buffer Information Specification-Algorithmic Modeling Interface (IBIS-AMI) enables sharing of a model, which encompasses the complexity of the transmitter and receiver blocks. The IBIS-AMI model outputs an equalized waveform along with sampling information for the EDA tool.
  • Tuesday, August 17 • 3:00 PM – 3:40 PM Design Case Study & Experimental Validation for a 100 Gb/s Per Lane C2M Link Using Channel Operating Margin. The Chip-to-Module (C2M) interface as specified by the IEEE 802.3 Standard Working Group, and currently being updated for higher data rates, implements links that must perform up to 800 Gb/s (8 × 100 Gb/s) within the internet infrastructure physical layer. The design of these channels requires multiple engineering disciplines that must be fused together to create a comprehensive workflow.
  • Wednesday, August 18 • 9:00 AM – 9:40 AM Impact of Power Plane Termination on System Noise. To reduce power rail voltage fluctuations that could lead to noise emissions, it is critical to keep the power plane’s impedance below a target and minimize impedance peaks over frequency. Previous studies have shown that RC power plane termination can reduce power plane impedance peaks, including on an electrically dense production board where decoupling capacitors would not fit near-critical memory components.

And to round out the agenda:

Panel Discussions

  • Monday, August 16 • 4:45 PM – 6:00 PM Panel — PCIe 6.0: New Challenges & New Tests for an Old Standard. The consumer and market demand for higher data throughput has been pushing industries and standards to increase data rates. The evolution of other standards has also been pushing technologies such as PCIe to higher data rates. PCIe 5.0, with a data rate of 32 GT/s, had already introduced many signal integrity and design challenges.
  • Tuesday, August 17 • 4:00 PM – 5:15 PM Panel — Avoiding Disaster: Planning for Laminate Electrical Properties as a Function of Temperature. How much variation should be expected and what should OEM designers of high-speed systems do to accommodate these variations in pre-prototype signal-integrity simulations?

Now you know at least some of what Samtec will be doing at DesignCon. If you are engaged in any form of system design, this is a must-attend event.  My prior warning still applies; get ready, Samtec dominates DesignCon 2021.