OnStar: Getting Connectivity Wrong
by Roger C. Lanctot on 04-24-2022 at 6:00 am

One of my pet beefs with the car industry is that car makers, on the whole, have failed to agree among themselves on what basic vehicle connectivity ought to consist of. From car maker to car maker, prices vary, bundles vary, and free periods of service access vary; the variations only get worse across model years as offers change and systems are modified.

The 26-year veteran of vehicle connectivity – OnStar – is one of the worst offenders. OnStar offers four basic service tiers ranging in price from $24.99 to $49.99 per month. In other words, GM appears to believe that consumers will pay more for vehicle connectivity than they would for Hulu, Amazon Prime, Netflix, Apple TV+, or HBO.

At that high end, GM OnStar is priced in the neighborhood of a mid-range gym membership. What do they think they’re selling?

If that weren’t bad enough, OnStar has a host of “a la carte” offerings which only add to the cost and confusion. This is connected vehicle “monetization” malpractice.

But GM is not alone. Toyota and BMW, to name just two competing auto makers, have spreadsheets to explain which services are available on which trim levels for which years at which price points.

But the confusion doesn’t stop there. GM’s uncoordinated and awkward approach to connectivity extends to its Cruise Automation subsidiary.

The latest misadventure at Cruise – a Cruise robotaxi wandering away from police responding to a Cruise vehicle with its headlights off – highlights the reality that Cruise remains siloed off from GM. Cruise vehicles are clearly not equipped with OnStar, and this disconnect might well prove fatal.

San Francisco Police Stop Self-Driving Car – NBC Bay Area: https://www.nbcbayarea.com/news/local/san-francisco/driverless-car-traffic-stop-san-francisco/2860690/

The death of Elaine Herzberg in a collision with a self-driving Uber in Tempe, AZ, led to the termination of Uber’s self-driving car testing and the shedding of the Advanced Technologies Group that was working on the technology. It only takes one slip to erase billions of dollars of corporate value.

More than a decade ago GM added a “remote vehicle slowdown” function to OnStar as an enhancement to its stolen vehicle tracking and recovery solution. Since GM’s introduction of the feature, Hyundai has added it to its available connected services as well.

The function allows police – who have been alerted to a stolen GM vehicle and who have the vehicle in sight – to remotely slow the vehicle down to a stop. That function would have been awfully nice to have built into the wayward Cruise vehicle that appeared to be evading the police – but was really reacting to the flashing lights on the police cruiser.

The lack of uniformity in connected vehicle services from all auto makers – with the exception of Tesla – reflects the dysfunctional pursuit of aftermarket subscription-based service revenue. Tesla generally charges $10 a month and includes software updates and a range of connected services in that single price.

Ten dollars a month falls into the category of a no brainer for the average Tesla buyer. GM’s $24.99/month basic service is a no-go for many new car buyers.

The strangest thing of all is that nearly every auto maker, with the exception of Tesla, has put together an automatic crash notification capability – but all charge for it rather than viewing customer care as a core brand-building value proposition. The automotive industry ought not to be charging for automatic crash notification – it’s like paying for a fire extinguisher in your hotel room.

The crowning stroke of all of the misguided connected car decision making at OEMs, though, is the decision not to include basic diagnostic data communications and software updates (along with automatic crash notification) in an inexpensive basic connectivity package. One of the most amazing brand-building value propositions for Tesla has been the company’s ability to provide post-crash analytics for regulators and the press. Time and time again, Tesla has exonerated itself for crashes, blaming misbehaving drivers instead.

This contrasts starkly with GM’s denials, eight years ago, that it had any idea there was a problem with its ignition switches – prior to the massive government fine and mandated vehicle recalls to correct the problem blamed for multiple fatal crashes. Similarly, Toyota pleaded ignorance of the existence of, or an explanation for, unintended acceleration events.

It’s time for auto makers to include a basic level of connected services with their vehicles, including crash notification, software updates, and basic vehicle diagnostics. There is no room for plausible deniability, and confusing subscription schemes are the enemy of successful connected car programs and safer driving.

Also read:

Tesla: Canary in the Coal Mine

Chip Shortage Killed the Radio in the Car

A Blanche DuBois Approach Won’t Resolve Traffic Trouble


LRCX weak miss results and guide – Supply chain worse than expected and longer to fix
by Robert Maire on 04-23-2022 at 6:00 am

-Lam missed on both top & bottom line due to supply chain
-Previous guide was “overly optimistic” about fixing issues
-Demand is great but doesn’t matter if you can’t serve it
-We remain concerned about ability to fix issues in near term.

A miss on numbers – supply issues will persist

Lam reported revenues of $4.06B versus already-reduced street expectations of $4.25B. EPS also missed, coming in at $7.40 versus street of $7.51. Results would have been even worse backing out the one-time gains from Lam’s venture investments of $0.11 per share (implying roughly $7.29 ex-gains). Guidance was even weaker, with June expected to be $4.2B ± $300M and EPS of $7.25 ± $0.75, falling well short of street expectations of $4.46B and EPS of $8.24.

Obviously things are not getting better. We hope that Lam has taken enough of a haircut to its numbers that it doesn’t miss the June guidance.

Management says it was “overly optimistic” about fixing supply chain

Lam management previously thought that the supply chain issues would be solved more quickly, which turned out not to be the case; in fact, it’s quite clear that issues will persist into the June quarter and likely beyond. We don’t think management will quote an expected end date for the issues again after being “overly optimistic” about resolving them. It sounds like things may be getting worse/broader.

Deferred revenue grew and will likely grow again due to missing parts

With roughly $2B in deferred revenue and the probability that June will see a further increase, there is a lot of product that has been shipped to the field missing parts that may or may not get delivered.
We also wonder about the higher costs and lower gross margins implied by doing field installs of missing parts on incomplete tools.

This is obviously a messy situation: using customer premises to store unfinished tools which would otherwise clog Lam’s production facilities.
Customers likely accept this suboptimal arrangement because Lam forces it on them lest they lose their place in line, but it is a poor substitute.
We would hope that these unfinished tools get completed in the second half of the year so the deferred revenue can come home.

Move to Malaysia doesn’t help

Unfortunately, Lam’s move of significant production came at an inopportune time. Trying to bring up new sub-suppliers in Asia during stressful times in the supply chain is poor timing. Many of those potential suppliers already have issues, and Lam, as a new customer, will get lower priority and longer lead times.

This likely stretches out the move to Asia, resulting in more costs/double costs while Malaysia takes longer to come on line.

We remain concerned about possible share loss and inability to gain share

If Lam’s competitors can fix their supply chain issues while Lam can’t, we could see share shift, as desperate customers will readily move to suppliers who can supply. Right now everyone seems to be in the same boat, but that could change. It’s also harder for Lam to gain share with new products that just may not be available to customers.

Generally, in times of short supply, there is less ability to gain share.

More vertically integrated manufacturers may fare better as they have more control over their supply chain. Some competitors, in Japan, such as TEL, typically have stronger relationships with suppliers that almost look like a vertical supply chain.

Should the industry go back to being more vertical than outsourced?

We had asked Lam management several years ago at an analyst meeting about moving away from a fully outsourced model to a more vertical model, as the industry has become less cyclical and thus more suited to vertical supply. The company said they were thinking about it but never did it.

We think that many in the semiconductor tool business have to take a harder look at getting more vertical so that they have better control over their parts and manufacturing. We are at a point in the industry where it is much more important to ensure supply than to outsource the risk of cyclicality.

The stock

Obviously the results are disappointing as is the guidance. It’s also disappointing that management underestimated the length and severity of the situation.

We don’t think there has been a lot of damage, but that could change. Lam should be less impacted than ASML, as Lam’s components are much less complex than ASML’s systems and subcomponents, yet Lam appears to have been even more impacted, which we find strange. Given the lens issue, ASML has a better excuse for supply chain problems.

Lam’s stock will obviously take a hit here, and we think upside may be limited until Lam can factually state that the issues have finally been fixed and deferred revenues start to come home.

After missing the projected recovery, we are likely in wait-and-see mode.
We see no reason to get into the stock until we have better clarity, and no real reason to continue to own it, as we don’t see near- or medium-term upside.

Also read:

Chip Enabler and Bottleneck ASML

DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain

AMAT – Supply Constraints continue & Backlog Builds- Almost sold out for 2022


Podcast EP73: Adventures in Supercomputing with Luminous Computing and Andes Technology
by Daniel Nenni on 04-22-2022 at 10:00 am

Dan is joined by Dr. Dave Baker, VP digital design at Luminous Computing and John Min, director of field applications at Andes Technology. Dave and John explore with Dan their collaboration to build high-performance supercomputers.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Chip Enabler and Bottleneck ASML
by Robert Maire on 04-22-2022 at 6:00 am

-ASML reported an “in line” Q1- Orders remain super strong
-Ongoing supply chain issues will limit growth and upside
-ASML targets 2025 for supply fixes – We are not so sure
-Intel, TSMC, Samsung won’t be able to build all fabs they plan

ASML has an “in-line-ish” Q1, orders still off the charts

ASML reported €3.5B in revenues and EPS of €1.73. Revenues were slightly light while earnings were a slight beat. Margins were 49%. More importantly, orders remain very strong at €7B, including €2.5B of EUV and multiple high NA systems.

Orders continue to outstrip ability to supply so more of the focus of both management and investors will be on ASML’s ability to ramp their supply chain to meet demand.

Talking about 2025 as target to fix supply issues

“Let’s keep our fingers crossed and see what 2025 brings us,” said Peter Wennink on the earnings call – not very confidence-inspiring. 2025 was mentioned many times on the call as the target for fixing the supply chain issues that are limiting ASML’s ability to ship tools. We think many investors misunderstand the factors limiting ASML’s supply chain and therefore its growth.

ASML is not limited by chips, current issues in Europe due to Ukraine, or even Covid-related issues. The supply chain issues are unique and specific to ASML and ASML’s suppliers. Suggesting that 2025 will be the answer is more of a hope than a definitive plan in place to ensure that the issues will be fixed.

Zeiss is the key bottleneck and immovable object on the road to growth

While ASML is the key enabler of the chip industry, Zeiss is the key enabler of ASML. Zeiss makes the key optics that are the differentiator that makes ASML’s tools work. There is no second source; ASML is totally dependent upon Zeiss.

Most investors do not understand that Zeiss is not a normal company with shareholders. Zeiss is a foundation. The stated aim of the foundation is the furthering of science and ensuring the employees’ well-being and continued employment. Profit and growth are afterthoughts. It is essentially run for the betterment of employees, not profit. It is German labor unions and labor relations taken to an extreme.

Being the oldest such foundation in Germany also makes it slower to change.
One of the current issues is that Zeiss does not have enough space to increase production and doesn’t want to ruffle the feathers of neighbors with construction.

There is also the fact that not a lot of young Germans want to apprentice for years to polish glass for the rest of their lives. ASML just may be stuck with a supplier that can’t respond as quickly as needed, doesn’t care to, and doesn’t have to. It’s like trying to get a 175-year-old Galápagos tortoise to run a sprint. Not gonna happen.

This all trickles down to Intel, TSMC & Samsung not building fabs

The demand for EUV tools, let alone High NA tools, far outstrips ASML’s ability to deliver them. Something’s gotta give. Intel, TSMC and Samsung can place all the orders they want and they will just pile up on ASML’s desk.

TSMC is far ahead of Samsung in EUV tool count, and Intel is a distant third. Other companies in the memory space are also entering the EUV club and placing orders as well. This means order growth likely well in excess of ASML’s roughly 20% limit on production growth – in other words, a significant shortfall.
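
To make the arithmetic concrete, here is a toy sketch with purely invented numbers (not ASML’s actual order or output figures): if orders compound faster than the roughly 20% production growth ceiling, the cumulative backlog only widens.

```python
# Toy EUV shortfall arithmetic -- all numbers invented for illustration.
# Demand compounding faster than the ~20% production growth ceiling
# means the cumulative backlog only grows.
demand, supply = 70, 55  # hypothetical EUV systems wanted/built per year
backlog = 0
for year in (2023, 2024, 2025):
    demand = int(demand * 1.40)   # assumed order growth rate
    supply = int(supply * 1.20)   # ~20% production growth limit
    backlog += max(0, demand - supply)
    print(f"{year}: demand {demand}, supply {supply}, cumulative shortfall {backlog}")
```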

It is likely that TSMC could take up 100% of ASML’s EUV production by itself. Intel can never hope to catch TSMC until it can get more EUV tools.

Basically, there is no way the chip industry will be able to get enough tools for all of today’s fab plans; either fabs will not be built or they will remain empty shells until EUV tools become available. Intel recently uprooted an EUV tool from Portland to send to Europe, which is something you never want to do unless you are very desperate.

In a way this is likely a good thing for the industry in that it will put off the oversupply cyclicality that has historically plagued the industry. It will allow prices to remain higher for chips due to shortages and will allow ASML to charge whatever it wants.

Maybe ASML could charge more for a “Fastpass” to cut the EUV line much like Disney does. Could financial buyers take a place in line and “scalp” EUV tools?

Chip industry needs alternatives to current lithography process

The chip industry is clearly limited by ASML and Zeiss. The industry desperately needs either alternatives to the existing lithography process and tools, such as e-beam direct write and DSA (directed self-assembly), or process enhancements to the existing lithographic process that can speed up or make more efficient use of existing lithographic tools to get more output.

We don’t need more dep and etch tools. Maybe some more yield management to help optimize EUV and litho tools. Necessity is the mother of invention.

The Stock

ASML is in the enviable position of being a monopoly in an industry desperate for its tools. This will not change any time soon; in fact, the gap between demand and supply of litho tools will likely only get worse over the medium term as orders pile up without a corresponding increase in supply.

While the quarter reported was only OK, the order news says that the shortage of ASML tools will continue. ASML will be able to increase capacity but we think it will take far longer than most investors expect or understand. We would be careful not to extrapolate and expect too much growth out of ASML due to the limitations inherent in their system.

We expect that ASML’s stock, while down for the year, will respond positively, as there were some concerns about demand slowing. The issue is clearly not demand but the ability to supply… it’s a better problem to have, as it is much more fixable, but it will take time.

Also read:

DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain

AMAT – Supply Constraints continue & Backlog Builds- Almost sold out for 2022

Intel buys Tower – Best way to become foundry is to buy one


Truechip’s DisplayPort 2.0 Verification IP (VIP) Solution
by Kalar Rajendiran on 04-21-2022 at 10:00 am

Integrating IP to build SoCs has been consistently on the rise. Growth in complexity and time-to-market pressures are the primary drivers behind this phenomenon. Consequently, the IP market segment has also been enjoying tremendous growth. While this is great news for chip design schedules, it does highlight the increased demand for quick, easy and accurate verification. Without a time- and cost-efficient way to verify an IP solution, the cost of verifying can end up being higher than the cost of developing the IP itself, and an SoC’s development schedule would be adversely impacted. Naturally, the Verification IP (VIP) segment of the IP market has seen high growth rates.

There are IP verification solutions offered by a number of companies. One company that was introduced in late 2020 to the SemiWiki audience is Truechip. Founded in 2008, Truechip characterizes itself as the Verification IP Specialist. It offers an extensive portfolio of VIP solutions to verify IP components interfacing with industry-standard protocols integrated into ASICs, FPGAs and SoCs.

Salient Aspects of Truechip’s VIP Solutions

Truechip’s Verification IPs are fully compliant with standard specifications and come with an easy plug-and-play interface to enable a rapid development cycle. The VIPs are highly configurable by the user to suit the verification environment. They also support a variety of error injection scenarios to help stress test the device under test (DUT). Their comprehensive documentation includes user guides for various scenarios of VIP/DUT integration. Truechip’s VIP solutions work with all industry-leading dynamic and formal verification simulators. The solutions also include assertions that can be used in formal and dynamic verification as well as with emulation.

And their solutions come with the TruEYE GUI-based tool that makes debugging very easy. This patented debugging tool reduces debugging time by up to 50%.

Truechip’s DisplayPort 2.0 VIP Solution

One interface IP that is gaining a lot of attention these days is the DisplayPort IP. Truechip has been supporting the display market segment with VIP solutions for HDMI, HDCP and DisplayPort. They recently expanded their portfolio with the addition of a DisplayPort 2.0 VIP solution. Their DisplayPort 1.4 VIP has a long track record within the customer base, and their DisplayPort 2.0 VIP brings a lot of upgrades to keep up with the enhancements from DisplayPort 1.4 to 2.0. The following figure depicts a block diagram of the corresponding VIP environment.

The DisplayPort 2.0 VIP is fully compliant with the standard DisplayPort version 2.0 specifications from VESA. Nonetheless, it is a lightweight VIP with an easy plug-and-play interface for a rapid design cycle and reduced simulation time. The solution is offered in native SystemVerilog (UVM/OVM/VMM) and Verilog, with compliance and regression test suites available.

Some Salient Features of Truechip’s DisplayPort 2.0 VIP Solution

  • Supports High-Bandwidth Digital Content Protection (HDCP) versions 1.4, 2.2 and 2.3
  • Supports Multi-Stream Transport (MST)
  • Supports Link Training (LT) Tunable PHY Repeaters (LTTPR)
  • Supports Reed-Solomon Forward Error Correction, RS(254,250)
  • Supports multi-lane configuration (up to 4 lanes)
  • Supports DSC v1.2a (Compressed Display Stream Transport Services)
  • Supports DisplayPort Configuration Data (DPCD) version 1.4
  • Supports legacy EDID
  • Supports I2C over AUX Channel and Native AUX
  • Supports dynamically configurable modes
  • Supports dynamic as well as static error injection scenarios
  • On-the-fly protocol checking using protocol check functions and static and dynamic assertions
  • Built-in coverage analysis
  • TruEYE GUI analyzer tool to show transactions for easy debugging

Deliverables

  • DisplayPort 2.0 BFMs for:
    • Source – Link Layer
    • Source – MAC Layer
    • Source – PHY Layer
    • Sink – Link Layer
    • Sink – MAC Layer
    • Sink – PHY Layer
    • Branching Devices
  • DisplayPort layered monitor & scoreboard
  • Test Environment & Test Suite:
    • Basic and Directed Protocol Tests
    • Random Tests
    • Error Scenario Tests
    • Assertions & Cover Point Tests
    • Compliance Test Suite
    • User Test Suite
  • Integration guide, user manual, and release notes
  • TruEYE GUI analyzer to view simulation packet flow

About Truechip

Truechip, the Verification IP specialist, is a leading provider of design and verification solutions and has been serving customers for more than a decade. Its solutions help accelerate the design cycle, lower the cost of development and reduce the risks associated with the development of ASICs, FPGAs and SoCs. The company has a global footprint with sales coverage across North America, Europe and Asia. Truechip provides the industry’s first 24×5 support model with specialization in VIP integration, customization and SoC verification.

For more information, refer to the Truechip website.

Also Read:

Bringing PCIe Gen 6 Devices to Market

PCIe Gen 6 Verification IP Speeds Up Chip Development

USB4 Makes Interfacing Easy, But is Hard to Implement


The ASIC Business is Surging!
by Daniel Nenni on 04-21-2022 at 6:00 am

Application-Specific Integrated Circuits were the foundation of the semiconductor industry up until the IDMs came to power in the 1980s and 90s. Computer companies all had their own fabs (I worked in one) until start-up companies like Sun Microsystems started using off-the-shelf chips from Motorola. Sun moved to the fabless model and designed its own SPARC chips, but Intel was too powerful; with Windows and Linux, Intel took over the CPU space forthwith.

During this transition quite a few semiconductor companies adopted the ASIC model and designed and manufactured chips for other systems companies. IBM, NEC and Toshiba come to mind, and there were also dedicated ASIC companies like VLSI Technology and LSI Logic that had their own fabs.

The fabless transformation changed all of this, of course, and now the ASIC market is dominated by fabless companies. The ASIC business today is split into two categories: pure-play fabless ASIC companies (GUC, Faraday, Alchip, Sondrel, VeriSilicon, Alphawave, SemiFive, eInfochips, etc.) and chip companies that also do ASICs (Broadcom, Marvell, MediaTek). Broadcom has the former Avago ASIC business, and Marvell acquired eSilicon and the GlobalFoundries/IBM ASIC business. MediaTek grew its ASIC ambitions organically.

Why is the ASIC business surging, you ask? The same reason EDA, IP and TSMC are surging: systems companies from all walks of life are now doing their own ASICs. It really has come full circle.

Alchip recently released numbers that are quite telling, with four consecutive years of record-setting performance. Alchip, founded in 2003, is headquartered in Taipei but has roots in the US and Japan.

One of the more telling pieces of data from Alchip is that the majority of its revenue (88%) came from FinFET designs from 16nm down to 5nm, some with complex packaging (CoWoS and MCM).

Alchip President and CEO Johnny Shen expects strong demand in 2022 to come from AI, HPC, and IoT. He also pointed out that a select number of large production-quantity leading edge AI devices entered mass production in 2021, accounting for the company’s record performance.

Also in 2021, Alchip, a TSMC-certified Value Chain Aggregator, taped-out a significant number of 16nm, 12nm and 7nm designs. Several of the 7nm designs involved advanced packaging technology. A number of the 5nm designs will tape out in 2022 and a 3nm test chip is under development, with expected tape out in late 2022.

Bottom line: The ASIC business is the semiconductor world’s oldest profession, much of which is done undercover. The chip supply constraints of the roaring ’20s have done more for ASICs than anyone could have imagined and will continue to do so, absolutely.

About Alchip
Alchip Technologies Ltd, headquartered in Taipei, Taiwan, is a leading global provider of silicon design and production services for system companies developing complex and high-volume ASICs and SoCs. The company was founded by semiconductor veterans from Silicon Valley and Japan in 2003 and provides faster time-to-market and cost-effective solutions for SoC design at mainstream and advanced nodes, including 7nm processes. Customers include global leaders in AI, HPC/supercomputing, mobile phones, entertainment devices, networking equipment and other electronic product categories. Alchip is listed on the Taiwan Stock Exchange (TWSE: 3661) and the Luxembourg Stock Exchange and is a TSMC-certified Value Chain Aggregator.

Also read:

Alchip Reveals How to Extend Moore’s Law at TSMC OIP Ecosystem Forum

Alchip is Painting a Bright Future for the ASIC Market

Maximizing ASIC Performance through Post-GDSII Backend Services


Podcast EP72: Analog AI/ML with Aspinity
by Daniel Nenni on 04-20-2022 at 10:00 am

Dan is joined by Tom Doyle, CEO of Aspinity. Dan explores the benefits of Aspinity’s analog signal processing technology with Tom. The ultra-low power analog computing capability delivered by Aspinity has significant implications for the design and deployment of AI/ML systems.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Visual Debug for Formal Verification
by Steve Hoover on 04-20-2022 at 6:00 am

Success with Open-Source Formal Verification

The dream of 100% confidence is compelling for silicon engineers. We all want that big red button to push that magically finds all of our bugs for us. Verification, after all, accounts for roughly two-thirds of logic design effort. Without that button, we have to create reference models, focused tests, random stimuli, checkers, coverage monitors, regression suites, etc.

Of course, there is no big red button, and I’d be crazy to suggest that we could abandon all of that work altogether. But, at the same time, that’s not far from what Akos Hadnagy and I did, several years ago, in developing the WARP-V CPU generator.

I wrote WARP-V initially to explore code generation using the emerging Transaction-Level Verilog language. I brought the model to life with a simple test program that summed numbers from 1 to 10. Then, Akos put RISC-V configurations of WARP-V through the wringer, as a student in Google Summer of Code, using the open-source RISC-V Formal framework. By completing formal verification (which has now also been done independently by Axiomise using different tools and checkers), we felt no inclination to bother with any of the standard RISC-V tools and compliance tests.

Formal Verification Hurdles

While our formal-focused approach helped eliminate a considerable amount of work, in other ways it did add some effort, too. Of course it did. If formal verification were a panacea, everyone would be taking this approach, and while formal verification has been around for a long time, it still struggles to attain first-class status in the verification landscape. This has little to do with the core science and everything to do with usability.

The first big leap in usability for formal verification came with the provision of counterexample traces. These let you debug formal failures much like simulation failures, using a waveform viewer. This, however, is not enough to put formal verification on a level playing field with dynamic verification. For one thing, simulations can produce log files in addition to waveforms. These provide high-level context about simulations to help with debugging. For aggressive use of formal verification, getting big picture context is important. Here’s why:

Traditionally, focused testing plays a major role in stabilizing the model. The myriad basic bugs are identified by focused tests, which are written for a specific purpose in a very controlled context. You know what they are doing. You know what to look for. Formal verification, however, will identify a counterexample that could be doing absolutely anything (within your constraints). Fortunately, the trace will be short, but formal tools have a way of finding really gnarly corners you would never expect or never be able to hit in a controlled fashion. That’s what’s so great about formal!

So if we’re going to find a significant portion of our bugs using formal methods, we’d better make it easier to figure out what’s going on in the counterexamples. That’s where visualization comes in.

Streamlining Debugging with Visualization

WARP-V utilizes the Visual Debug framework, now freely available to open-source projects in the Makerchip.com IDE. Visual Debug (or VIZ) makes it easy to define simulation visualizations. These aid in debugging any digital circuit developed using any hardware description language and any design environment that produces industry-standard (.vcd) trace files. You may have seen screenshots of visualizations similar to those of WARP-V in several of my posts about my RISC-V CPU design courses, in which hundreds of students have developed their own RISC-V CPUs.
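
Because VIZ keys off industry-standard .vcd traces rather than a proprietary database, any script or tool can consume the same data. As a rough illustration of how approachable the format is, here is a toy Python reader for the common subset of VCD – an illustrative sketch only, in no way part of Makerchip or VIZ:

```python
# Toy VCD reader: collect (time, signal, value) changes from a simple trace.
# Handles the common subset of the format only (no scopes, timescale,
# or real-number values) -- an illustration, not a production parser.
def read_vcd(path):
    names = {}    # VCD id code -> signal name
    changes = []  # (time, signal name, value string)
    now, in_header = 0, True
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            if in_header:
                if line.startswith("$var"):
                    fields = line.split()  # $var <type> <width> <id> <name> ... $end
                    names[fields[3]] = fields[4]
                elif line.startswith("$enddefinitions"):
                    in_header = False
            elif line[0] == "#":            # time marker, e.g. "#1500"
                now = int(line[1:])
            elif line[0] in "01xXzZ":       # scalar change, e.g. "1!"
                changes.append((now, names.get(line[1:], line[1:]), line[0]))
            elif line[0] in "bB":           # vector change, e.g. "b1010 !"
                value, idcode = line.split()
                changes.append((now, names.get(idcode, idcode), value[1:]))
    return changes

# Example: print the last few value changes of a trace.
# for t, sig, val in read_vcd("trace.vcd")[-5:]:
#     print(t, sig, val)
```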

Using Visual Debug for the first time is like turning the lights on in a room you didn’t realize was dark. Just as you wouldn’t walk into a dark room to find your car keys without turning on the lights, you shouldn’t start debugging without first enabling Visual Debug. Though it wasn’t the case at the start of WARP-V development, as you’ve undoubtedly guessed by now, VIZ now works for formal counterexamples as well as it does for simulation.

Implications of Easier Debugging

Let’s put these visualization benefits in the context of WARP-V’s design methodology. This means I first get to talk about the benefits of TL-Verilog – my favorite topic. Utilizing TL-Verilog, WARP-V is able to support different pipeline depths and even different instruction set architectures from the same codebase. And it is able to do so in less code (and correspondingly fewer bugs) than a single RTL-based CPU core. Furthermore, transaction-level design greatly simplifies the task of creating test harnesses to connect any RISC-V hardware configuration to the RISC-V Formal checkers. As described in “Verifying RISC-V in One Page of Code!”, the reduction in modeling effort across four different CPU configurations was arguably a factor of 70x or more! (These benefits would apply to test harnesses for dynamic verification as well.)

In the face of these TL-Verilog benefits, the effort to debug formal verification failures became a significant portion of the remaining work, and Visual Debug would have streamlined this effort. More generally, being able to easily decipher formal counterexamples can be the boost in productivity that tips the scales for formal verification. This, in turn, makes our resulting hardware more robust and secure. And security is quite possibly the biggest challenge faced by design teams today.

Visual Debug in Action

I leave you with a screen-capture, narrated by yours truly, demonstrating debugging of a register bypass (aka register forwarding) bug in WARP-V.

Related Links: Makerchip.com, Visual Debug, WARP-V CPU generator, RISC-V Formal, RISC-V CPU design courses, and “Verifying RISC-V in One Page of Code!”


White Paper: Advanced SoC Debug with Multi-FPGA Prototyping
by Daniel Nenni on 04-19-2022 at 10:00 am

S2C EDA recently released a white paper written by a good friend of mine, Steve Walters. Steve and I have worked together many times throughout our careers, and I consider him one of my trusted few, especially in regard to prototyping and emulation. Steve is also my co-author on the book “Prototypical II – The Practice of FPGA-Based Prototyping for SoC Design”. Prototypical II and this 10-page white paper are available on the S2C EDA website HERE.

Introduction

As SoC designs advance in complexity and performance, and software becomes more sophisticated and SoC-dependent, SoC designers face a relentless push to “shift left” the co-development of the SoC silicon and software to improve time-to-market.  Consequently, SoC verification has evolved to include multi-FPGA prototyping, and higher prototype performance, to support longer runs of the SoC design prototype, running more of its software, prior to silicon – in an effort to avoid the skyrocketing costs associated with silicon respins.  While FPGA prototyping for SoC design verification by its nature remains a “blunt instrument”, FPGA prototyping is still the only available pre-silicon verification option, beyond hardware emulation, for achieving longer periods of SoC design operation capable of running software, and, in some cases, “plugging” the SoC design prototype directly into real target-system hardware.  Not surprisingly, commercial FPGA prototype suppliers are using the latest FPGA technology to implement FPGA prototyping, offering multi-FPGA prototyping platforms, and advancing FPGA prototyping debug tool capabilities, to meet customer demands for more effective SoC verification.

Ideally, SoC design debug tools for FPGA prototyping would enable software simulation-like verification and debug at silicon speeds – providing visibility of all internal SoC design nodes, not impeding prototype performance, providing unlimited debug trace-data storage, and being quickly reconfigurable for revisions to the SoC design and/or the debug setup.  In reality, today’s SoC design debug tools for FPGA prototyping fall short of the ideal, and multi-FPGA prototyping adds to the challenge of achieving ideal debug tool capabilities.  As a result, today’s FPGA prototyping for SoC design debug offers tradeoffs among the ideal debug tool capabilities, and it is left to the SoC design verification team to configure an “optimal” verification strategy for each SoC design project – with consideration for future scaling-up and improved verification capabilities.

This white paper reviews some of the multi-FPGA prototyping challenges for SoC design verification and debug, and reviews one example of a commercially available multi-FPGA prototyping debug capability offered by S2C Inc., a leading supplier of FPGA prototyping solutions for SoC design verification and debug (s2ceda.com).

Summary and Conclusions

S2C’s MDM Pro hardware, together with S2C’s Prodigy FPGA prototyping platforms and Player Pro software, implements a rich set of debug features that gives SoC designers the flexibility to optimize the FPGA prototype debug tools for a given prototyping project.  MDM Pro combines off-FPGA hardware for “deep” trace-data storage and complex hardware trigger logic with probe-multiplexing IP in the FPGA, accessing a large number of debug probes over a few high-speed GTY connections to minimize the consumption of FPGA I/O.  It also allows more probe connections to be set up than need to be viewed at the same time, so that more probes may be viewed when needed without recompiling the FPGA or degrading debug performance.  Player Pro software complements the debug hardware with a powerful user interface for managing the debug setup, configuring advanced trace-data trigger conditions, initiating debug runs of the FPGA prototype, and viewing the debug trace-data from multiple FPGAs in a single viewing window.

Also read:

Prototype enables new synergy – how Artosyn helps their customers succeed

S2C’s FPGA Prototyping Solutions

DAC 2021 Wrap-up – S2C turns more than a few heads


SoC Application Usecase Capture For System Architecture Exploration
by Sondrel on 04-19-2022 at 6:00 am

Sondrel is the trusted partner of choice for handling every stage of an IC’s creation. Its award-winning define and design ASIC consulting capability is fully complemented by its turnkey services to transform designs into tested, volume-packaged silicon chips. This single point of contact for the entire supply chain process ensures low risk and faster times to market. Headquartered in the UK, Sondrel supports customers around the world via its offices in China, India, Morocco and North America.

Introduction

Early in the SoC development cycle, Product Managers, Systems Architects and relevant technical stakeholders discuss and elaborate product requirements.  Each group tends to have a specific mental model of the product: product managers typically focus on the end use and product applications, while Systems Architects focus on functionality and on the execution and implementation of the requirements.

The ‘Requirements Capture Phase’ identifies, formulates and records all known functionality and metrics, including performance in a clear and complete proposal. In addition, this exercise identifies functionality that is not fully understood or may be included later and seeks to determine and plan what tasks are required to complete the qualification and quantification of such functions.

On completion, or as complete as possible at the program’s start, the system architecture team’s requirements go through an analysis phase with appropriate inputs from design and implementation teams. The outcome of this iterative process is an architecture design specification that includes an architecture design for which all functionality, estimation of the power, performance and area are determined.

The inclusion of design and implementation effort at the initial phase ensures better accuracy and validation for the specification and architecture. In addition, it identifies the sensitivities needed to guide design choices.

The architecture analysis includes the architecture exploration, IP selection/specification, verification of requirements, and generation of the project execution plan with major tasks to be elaborated in later phases.

The architecture exploration of the candidate architecture is a significant component. It refines the architecture design by modelling the proposal and evaluating known or reference use cases, allowing the system topology to be defined and resources to be provisioned dynamically (memory, bus fabric data/control paths, etc.).

While this allows aspects of the functionality to be evaluated and validated (connectivity, timing, performance, etc.) for confidence in the correctness of the design, later phases use more detailed and accurate models to find and correct potential errors during the implementation of the architecture.

The remaining sections of this article cover the use of modelling in the architecture phase of the program.

SoC application use case capture for system architecture exploration

The initial part of SoC Architecture Exploration is a rigorous way of capturing one or more application use cases and dataflows that an SoC is required to perform.  An accurate and complete description of use cases is necessary to communicate with stakeholders and agree on requirements early in the product definition phase.

The Systems Architect seeks to draw out the product requirements and express them so that technical and non-technical stakeholders can keep up with the product intent and architectural choices without excessive technical detail.

Figure 1 shows an overview of this collaboration process in 8 steps:

  1. Market analysis, industry trends, product requirements definition carried out by the Product Manager for a potential SoC solution
  2. Product Usecase requirements are communicated to the System Architect, usually by presentations, spreadsheets or documents.
  3. Requirements are translated into the DSL format required by the modelling flows (a minimal sketch of such a capture follows this list)
  4. Tools generate an Executable Specification and visualisations of the use case
  5. Tools also generate the cycle-accurate SystemC model required for use case architecture exploration
  6. Systems architect inspects results of an exploration exercise and progressively converges to an optimal architecture for the SoC
  7. System Architect communicates findings with Product Manager
  8. The Product Manager may decide to modify requirements or collaborate with the Systems Architect to further refine the candidate SoC Architecture.
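
To make step 3 concrete, the sketch below shows one hypothetical shape such a machine-readable capture could take – tasks as nodes, dataflows as edges – using the stages of the vision use case discussed next. The structure and field names are invented for illustration; they are not Sondrel’s actual DSL.

```python
# Hypothetical use-case capture (step 3): nodes are processing stages,
# edges are dataflows between them. An invented, minimal stand-in to
# show the shape of the information -- not Sondrel's actual DSL.
usecase = {
    "name": "autonomous_vision",
    "tasks": [
        "frame_exposure", "frame_rx", "image_conditioning",
        "classical_cv", "computational_imaging",
        "ai_inferencing", "data_fusion", "data_tx",
    ],
    "dataflows": [  # (producer, consumer)
        ("frame_exposure", "frame_rx"),
        ("frame_rx", "image_conditioning"),
        ("image_conditioning", "classical_cv"),
        ("classical_cv", "computational_imaging"),
        ("computational_imaging", "ai_inferencing"),
        ("ai_inferencing", "data_fusion"),
        ("data_fusion", "data_tx"),
    ],
}

# A generator tool would walk this structure to emit the executable
# specification and the SystemC model (steps 4 and 5 above).
```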

Industry trends show that vision-based applications increasingly incorporate both classical computer vision techniques and neural-net-based AI inferencing, with a fusion step to combine results from the two stages.

Figure 2 shows a typical autonomous vision use case data flow graph, with nodes representing processing functions and edges representing data flow.  The specific stages are:

  • Frame Exposure – The interval during which a camera sensor takes a snapshot of its field of vision. The image sensor may be configured in either global shutter or rolling shutter mode, and each mode has an exposure period associated with it.
  • Frame RX – The interval over which pixels of an image, grouped in lines, are sent to the SoC over a real-time interface such as MIPI CSI-3.
  • Image Conditioning – Any image pre-processing, filtering or summarisation steps performed on the received data before the actual compute stages.
  • Classical Computer Vision – Well-known vision processing algorithms, for example, camera calibration, motion estimation or homography operations for stereo vision.
  • Computational Imaging – Vision algorithms are augmented with custom processing steps such as Pixel Cloud or Depth Map estimation
  • AI Inferencing – Neural Net based image processing for semantic segmentation, object classification and the like.
  • Data Fusion – Final stage sensor fusion and tracking. May also include formatting or packetisation processing.
  • Data TX – Can be over PCIE or a real-time interface such as MIPI CSI-3 at a constant or variable data rate.

Associated with every processing stage are parameters that need to be specified so that the dynamic simulation model can be configured correctly.  These parameters generally describe:

  1. Read DMA characteristics: Number of blocks, block sizes, memory addresses and memory access patterns
  2. Processing characteristics: The delay which the task will require in order to perform its processing.
  3. Write DMA characteristics: Number of blocks, block sizes, memory addresses and memory access patterns

Figure 3 shows that this information is best described in tabular format, where rows represent processing tasks and columns are parameters associated with the task.
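
As a minimal sketch of what one row of such a Figure 3-style table might carry (field names and numbers are invented for illustration, not Sondrel’s actual schema):

```python
from dataclasses import dataclass

# One row of the task-parameter table: read-DMA, processing and write-DMA
# characteristics for a single task. Fields are illustrative only.
@dataclass
class TaskParams:
    name: str
    rd_blocks: int          # read DMA: number of blocks
    rd_block_bytes: int     # read DMA: block size
    rd_pattern: str         # read DMA: memory access pattern, e.g. "linear"
    proc_delay_cycles: int  # delay the task requires for its processing
    wr_blocks: int          # write DMA: number of blocks
    wr_block_bytes: int     # write DMA: block size
    wr_pattern: str         # write DMA: memory access pattern

ai_inferencing = TaskParams(
    name="ai_inferencing",
    rd_blocks=64, rd_block_bytes=4096, rd_pattern="linear",
    proc_delay_cycles=120_000,
    wr_blocks=16, wr_block_bytes=4096, wr_pattern="linear",
)
```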

The use case graph may also have an embedded sub-graph, which is often the case with AI applications that describe the algorithm as a Neural Network computation graph.  Figure 4 shows a sub-graph within a larger use case graph.  The sub-graph is described in the same tabular format and may be present in any part of the larger graph, not just with AI processing.

Usecase parameters captured in tabular format, as shown in Figure 3, are sufficient to describe the application intent regarding dataflows between processing stages and the processing delay of a given stage.  The added benefit of having the graph drawn to the left of the table is that the data flow, and hence the relationship between nodes as processing stages, becomes intuitive to understand.  Even for large graphs the method is applicable, with supplementary information readily available if required.

Separate from the Application Usecase is a model of the Hardware Platform, which will perform the data transfers and processing delays as prescribed by the Usecase model.  The Hardware Platform model will typically have the following capabilities:

  1. Generate and initiate protocol compliant transactions to local memory, global memory or any IO device
  2. Simulate arbitration delays in all levels of a hierarchical interconnect
  3. Simulate memory access delays in a memory controller model as per the chosen JEDEC memory standard.

Figure 5 shows a block diagram of one such Hardware Platform, which, in addition to being a simulation model, forms the basis for elaborating an SoC architecture specification.

So far we have defined two simulation constructs – the Application Usecase Model and the Hardware Platform Model.  What is now required is a specification of how the Usecase maps onto the Hardware Platform subsystems – that is, which tasks of the application usecase model are run by which subsystems in the hardware platform model.  Figure 6 shows the full simulation model with usecase tasks mapped onto subsystems of the hardware platform.

The Full System Model in Figure 6 is the dynamic performance model used for Usecase and Hardware Platform Exploration.

Every node in the Usecase graph is traversed during simulation, with the Subsystem master transactor generating and initiating memory transactions to one or more slave transactors. Delays due to contention, pipeline stages or outstanding transactions are applied to every transaction, and these cumulatively sum up to the total duration that the task is active.

The temporal simulation view in Figure 7 shows the active duration of each task for a single traversal of the Application Usecase.  The duration of the entire chain is defined as the Usecase Latency.  Having one visualisation showing the Hardware Platform, Application Usecase and Temporal Simulation views works very well for various stakeholders because it is intuitive to follow.
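
To illustrate the mechanics just described – per-task active durations summing to a Usecase Latency – here is a toy Python sketch. The real exploration model is cycle-accurate SystemC; this stand-in simply charges each memory transaction an assumed contention penalty and sums a linear chain (all numbers invented):

```python
# Toy latency model: each task issues read transactions, "processes",
# then issues writes; a fixed per-transaction contention penalty stands
# in for the arbitration/memory delays a real SystemC model simulates.
BASE_NS_PER_TXN = 10       # assumed cost of an uncontended transaction
CONTENTION_NS_PER_TXN = 4  # assumed average added delay under contention

def task_active_ns(rd_txns, proc_ns, wr_txns):
    mem_ns = (rd_txns + wr_txns) * (BASE_NS_PER_TXN + CONTENTION_NS_PER_TXN)
    return mem_ns + proc_ns

# (read transactions, processing ns, write transactions) per stage
chain = {
    "frame_rx":           (2000,  50_000, 2000),
    "image_conditioning": (2000,  80_000, 1000),
    "ai_inferencing":     (4000, 400_000,  500),
    "data_fusion":        ( 500,  60_000,  500),
}

usecase_latency = 0
for name, (rd, proc, wr) in chain.items():
    active = task_active_ns(rd, proc, wr)
    usecase_latency += active          # linear chain: durations sum
    print(f"{name:20s} active for {active} ns")
print(f"usecase latency: {usecase_latency} ns")
```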

A single traversal is not that useful, besides providing some sanity checks on the setup of the environment.  For thorough System Performance Exploration, multiple traversals need to be run, and in this setup we see the two phases of the simulation: a transient phase while the pipeline is filling up, followed by the steady state when the pipeline is full and the system is at maximum contention.

Figure 8 highlights a portion of the simulation when the system is at maximum contention. During the steady-state, metrics are gathered to understand the performance characteristics and bounds of the system.  This guides further tuning and exploration of the use case and hardware platform.

Figure 9 shows two configurations of the hardware platform and the resulting temporal views.  One system is set up for low latency by using direct streaming interfaces to avoid data exchange through the DDR memory.

Yet again, showing the two systems visually brings clarity, so that all stakeholders can understand with a bit of guidance.

The complete architecture exploration methodology relates to use case and platform requirements, simulation metrics, key performance indicators and reports.

Figure 10 shows the flow of information in the following order:

  1. Application Usecase is defined first. The tabular format for capturing the use case is crucial here, as shown previously in Figure 3
  2. Usecase Requirements associated with the Application Usecase are stated.
  3. Usecase Requirements are converted into Key Performance Indicators, which are thresholds on metrics expected from simulation runs.
  4. Simulation metrics are collected from simulation runs
  5. A Usecase performance summary report is produced by checking whether metrics meet their Key Performance Indicators (a minimal sketch of this check follows below).
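
A minimal sketch of step 5, assuming KPIs are simple min/max thresholds on named simulation metrics (metric names and numbers are invented for illustration):

```python
# Step 5 sketch: compare collected simulation metrics against KPI
# thresholds and emit a pass/fail summary. Names and numbers invented.
kpis = {  # metric name -> (limit, "max" = stay below, "min" = stay above)
    "usecase_latency_ms": (33.0, "max"),   # e.g. a 30 fps frame budget
    "ddr_bandwidth_gbps": (25.0, "max"),   # stay within controller limit
    "frames_per_second":  (30.0, "min"),
}

metrics = {  # gathered during the steady-state window of the simulation
    "usecase_latency_ms": 28.4,
    "ddr_bandwidth_gbps": 27.1,
    "frames_per_second":  31.2,
}

for name, (limit, kind) in kpis.items():
    value = metrics[name]
    ok = value <= limit if kind == "max" else value >= limit
    print(f"{name:22s} {value:8.1f} vs {kind} {limit:8.1f} -> "
          f"{'PASS' if ok else 'FAIL'}")
```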

A similar flow applies to Hardware Platform Requirements whereby:

  1. Hardware Platform defined first
  2. Platform Requirements stated
  3. Platform KPIs extracted from Requirements
  4. Platform simulation metrics collected
  5. Platform performance summary generated by comparing metrics with KPIs.

Also read:

Sondrel explains the 10 steps to model and design a complex SoC

Build a Sophisticated Edge Processing ASIC FAST and EASY with Sondrel

Sondrel Creates a Unique Modelling Flow to Ensure Your ASIC Hits the Target