
Podcast EP106: SoC Verification Flows and Methodologies with Sivakumar P R of Maven Silicon

by Daniel Nenni on 09-14-2022 at 10:00 am

Dan is joined by Sivakumar P R, the Founder and CEO of Maven Silicon. He is responsible for the company’s vision, overall strategy, business, and technology. He is also the Founder and CEO of ASIC Design Technologies.

Dan and Sivakumar discuss SoC Verification Flows and Methodologies based on his article published on SemiWiki. Dan explores the SoC verification flow starting with verification IP and why verification engineers need to understand electronic systems, and how it helps their long-term career goals.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Machine Learning in the Fab at #59DAC

by Daniel Payne on 09-14-2022 at 8:00 am


It used to be true that a foundry or fab would create a set of DRC files, provide them to designers, and the process yield would be acceptable. If the foundry knows more details about the physical implementation of IC designs, however, it can improve the yield. A better methodology uses a digital twin of the design, process, and metrology steps, capturing the bidirectional information flow between the physical synthesis tool and the fabrication steps. A Machine Learning (ML) framework handles this bidirectional flow using accurate predictive models and efficient algorithms, running in the cloud. Ivan Kissiov of Siemens EDA presented on this topic at #59DAC, so I’ll distill what I learned in this blog.

A digital twin for the fab engineer is made up of process tools, metrology tools, a virtual metrology system along with a recipe.

Virtual Metrology – 2006 IEEE International Joint Conference on Neural Network Proceedings (pp. 5289-5293)

Models for the digital twin have to predict how new ICs will be manufactured and how they will respond to different process and tool variations. Goals for this approach are improved yield at lower cost, better process monitoring, higher fault detection, and superior process control.

Some process effects are only seen at the lot level, while others show up at the wafer level, and some only appear at the design feature level, so being able to fuse all of this data together becomes a key task. With a digital twin a design can be extracted, along with process and metrology models to produce a predictive process model.

Digital Twin: Process Model

An example data fusion flow from AMD shows how post-process measurements for feed-forward control go into the process model, and how the equipment model returns a modified recipe, along with in-situ sensors providing automatic fault detection.

Source: Tom Sonderman, AMD

Data fusion ties into machine learning for each of the fab process and metrology steps:

Data fusion steps

Delving into the train, test, and validate stage, there is data processing, feature engineering, model training, and finally model quality metrics:

Path to model quality metrics
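The shape of that stage can be sketched in a few lines of Python. This is purely illustrative — the function names, the synthetic wafer data, and the simple linear model below are my own stand-ins, not anything from the Siemens EDA framework:

```python
# Toy sketch of the train/test/validate stage: data processing,
# feature engineering, model fitting, and a quality metric.
# All names and data here are illustrative.
import random
import statistics

def engineer_features(raw_trace):
    """Collapse a raw in-situ sensor trace into summary features."""
    return (statistics.mean(raw_trace), statistics.pstdev(raw_trace))

def fit_linear(xs, ys):
    """Ordinary least squares for a single feature (closed form)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rmse(pred, actual):
    """Model quality metric: root-mean-square error."""
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)) ** 0.5

# Synthetic wafers: the mean of each sensor trace drives the metrology result.
random.seed(0)
traces = [[random.gauss(t, 0.1) for _ in range(50)] for t in range(20)]
targets = [2.0 * engineer_features(tr)[0] + 1.0 for tr in traces]

features = [engineer_features(tr)[0] for tr in traces]
slope, intercept = fit_linear(features[:15], targets[:15])   # train
preds = [slope * f + intercept for f in features[15:]]       # held-out test
print(f"held-out RMSE: {rmse(preds, targets[15:]):.4f}")
```

In a real fab flow each of these steps is of course far richer — the point is only the ordering: raw data is condensed into features, a model is fit on one slice of lots, and quality is scored on another.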

Statistical Process Control (SPC) has been used in fabs for decades, and with some adjustments it has evolved into Advanced Process Control (APC), whose Run-to-Run Control (RtR) and Fault Detection and Classification (FDC) components are shown below in a flow from Sonderman and Spanos:

Advanced Process Control
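A common textbook form of RtR is an EWMA controller: each run's metrology result is blended into an offset estimate that corrects the next run's recipe. The sketch below is a generic version with made-up plant gain, EWMA weight, and disturbance — not the specific controller in the Sonderman/Spanos flow:

```python
# Generic EWMA run-to-run (RtR) controller sketch. The plant gain (1.5),
# EWMA weight (0.3), and disturbance (4.0) are illustrative numbers.
def run_lot(u, true_offset):
    """Simulated process: a linear plant with an unknown shift."""
    return 1.5 * u + true_offset

def ewma_controller(target, gain=1.5, lam=0.3, runs=20, true_offset=4.0):
    offset_est = 0.0                      # controller's estimate of the shift
    history = []
    for _ in range(runs):
        u = (target - offset_est) / gain  # recipe setting for the next run
        y = run_lot(u, true_offset)       # metrology measurement after the run
        # EWMA update: blend the new residual into the offset estimate
        offset_est = lam * (y - gain * u) + (1 - lam) * offset_est
        history.append(y)
    return history

ys = ewma_controller(target=10.0)
# The first run misses the target by the full unknown offset;
# subsequent runs converge toward the target.
print(f"first run: {ys[0]:.2f}, last run: {ys[-1]:.2f}")
```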

Complex deep learning models can analyze IC fab data, using Shapley value explanations to evaluate the importance of each input feature to a given model. During the feature engineering phase, the challenge is feature extraction from images, and Principal Component Analysis (PCA) is the extraction method used.
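As a toy illustration of that PCA step, here is the leading principal component found by power iteration on made-up four-pixel "images" in which the first two pixels co-vary; everything below is an assumption for illustration, stdlib only:

```python
# Leading principal component via power iteration -- a toy stand-in for
# the PCA-based image feature extraction described above.
import random

def first_pc(rows, iters=200):
    """Leading principal component of mean-centered data (power iteration)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        # w = C v with C = X^T X / n, computed without forming C explicitly
        scores = [sum(x * vi for x, vi in zip(row, v)) for row in centered]
        w = [sum(s * row[j] for s, row in zip(scores, centered)) / n
             for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

random.seed(1)
# Synthetic 4-pixel "images" where the first two pixels vary together.
data = [[t + random.gauss(0, 0.05), t + random.gauss(0, 0.05),
         random.gauss(0, 0.05), random.gauss(0, 0.05)]
        for t in [random.gauss(0, 1) for _ in range(100)]]
pc = first_pc(data)
print([round(abs(x), 2) for x in pc])   # dominated by the first two pixels
```

The extracted direction concentrates on the correlated pixels, which is exactly the dimensionality reduction PCA provides before model training.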

Fabs have used a data mining approach for many years now, and it’s the process of extracting useful information from a great amount of data. This data can help process engineers discover new patterns, gaining meaning and ideas to make improvements.

Data mining

Machine Learning, in contrast, is the process of finding algorithms whose results improve as they learn from the data. With the ML approach, machines now learn without human intervention. The MLOps site has much detail on the virtues of using ML, and the flow used in fabs is shown below:

ML flow, source: MLOps

Summary

Data is king, and this truism applies to the torrents of data streaming out of fabs every minute of the day. As fabs and foundries adopt the digital twin paradigm in design, process, and metrology, there is a bidirectional flow of information between the physical synthesis tool and each step of the wafers going through a fab. Using a machine learning framework to create predictive models with efficient algorithms helps silicon yield in the fab.



Ultra-efficient heterogeneous SoCs for Level 5 self-driving

by Don Dingee on 09-14-2022 at 6:00 am

Ultra-efficient heterogeneous SoCs target the AI processing pipeline for Level 5 self-driving

The latest advanced driver-assistance systems (ADAS) like Mercedes’ Drive Pilot and Tesla’s FSD perform SAE Level 3 self-driving, with the driver ready to take back control if the vehicle calls for it. Reaching Level 5 – full, unconditional autonomy – means facing a new class of challenges unsolvable with existing technology or conventional approaches. From a silicon perspective, it requires SoCs to scale in performance, memory usage, interconnect, chip area, and power consumption. In a new white paper, neural network processing IP company Expedera envisions ultra-efficient heterogeneous SoCs for Level 5 self-driving solutions, increasing AI operations while decreasing power consumption in realizable solutions.

TOPS are only the start of the journey

Artificial intelligence (AI) technology is central to the self-driving discussion. Sensors, processing, and control elements must carefully coordinate every move a vehicle makes. But there’s a burning question: how much AI processing is needed to get to Level 5?

If you ask ten people, ten different answers come back, usually with something in common: it’s a big number. Until recently, conversations have been in TOPS, or trillions of operations per second. Some observers talk about Level 5 needing 3 or 4 POPS – peta operations per second. It may not sound like that big a deal since earlier this year, one SoC vendor announced a chip for self-driving applications with 1 POPS performance. They describe it as an “AI data center on wheels.” But, when asked what their power consumption is, they’re less forthcoming. Ditto for the transistor count or die size, probably massive.

These aren’t issues in a data center, but they are in a car. Every watt of power and pound of weight going into self-driving electronics cuts electric vehicle range, and bigger die sizes drive up wafer and package costs. Larger, more complex chip footprints often mean higher on-chip latency. Scaling AI inference TOPS without other improvements will soon run into a wall.

The self-driving processing pipeline workload

That’s not to say having more TOPS now doesn’t reveal helpful information about the compute workload. There are many unknowns – which sensor payloads provide better information, which AI models will perform best in the self-driving software stack, and what form that stack ultimately takes. Expedera’s white paper takes an in-depth look at the processing pipeline looking for answers, starting from a conceptual diagram.

Changes in the sensor package are ahead. There’s a debate around camera-only systems and whether they can detect all scenarios necessary to ensure safety. More sensors of different types and higher resolutions will likely appear and drive up processing requirements. In turn, more intensive AI models will be needed – and, in a fascinating observation from Expedera based on customer conversations, a self-driving processing pipeline may have ten, twenty, or more AI models operating concurrently.

Expedera expands on each of these phases in the white paper, looking at where compute-intensive tasks may lie. To deal with this self-driving workload, they anticipate a two- to three-order of magnitude increase in AI operations. At the same time, an order of magnitude decrease in power consumption (measured as thermal design power, or TDP) must occur for realizable implementations. According to Expedera, these combined effects are leaving GPUs in the dust when used as a tool for AI inference in vehicles.
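Putting those factors together as back-of-envelope arithmetic (the baseline numbers below are my own illustrative assumptions, not figures from the white paper):

```python
# Combining the projected scaling factors: ~100x more AI operations
# at ~1/10th the power. Baseline numbers are illustrative only.
baseline_ops = 1e15          # assume 1 POPS of AI inference today
baseline_tdp = 500.0         # watts -- an assumed data-center-class TDP

ops_needed = baseline_ops * 100   # two orders of magnitude more compute
tdp_budget = baseline_tdp / 10    # one order of magnitude less power

eff_today = baseline_ops / baseline_tdp   # ops per watt now
eff_needed = ops_needed / tdp_budget      # ops per watt required
print(f"required efficiency gain: {eff_needed / eff_today:.0f}x")
```

Whatever the exact baseline, the two factors multiply: roughly a thousandfold improvement in operations per watt, which is the gap Expedera argues GPUs cannot close.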

Ultra-efficient heterogeneous AI inference for scale

What could take the place of a GPU in more efficient AI inference? A neural network processing unit (NPU) as part of ultra-efficient heterogeneous SoCs, after overcoming the limitations of classical NPU hardware that Expedera identifies. Scaling drives latency up and determinism down. Hardware utilization is low, maybe only 30 to 40%, driving area and power consumption up. Multi-model execution poses problems in scheduling and memory usage. And partitioning TOPS to fit workloads may not be possible within choices made in a custom SoC architecture.

Some themes Expedera sees in ultra-efficient heterogeneous SoC discussions with customers:

  • Fire-and-forget task scheduling is crucial, with a simple runtime where jobs start and finish predictably, and tasks can be reordered to fit the models and workload.
  • Independent, isolated AI inference engines are a must, where available TOPS are sliced into configurable pieces of processing to dedicate to groups of tasks.
  • Higher resolution, longer-range sensors generate more intermediate data in neural networks, which can oversubscribe DDR memory.
  • IP blocks that worked well in SoCs at lower performance levels prove unrealizable when scaled up – taking too much area, too much power, or both.

Expedera’s co-designed hardware and software neural network processing solution hits new levels of TOPS area density, some 2.7x better than their competition, and pushes hardware utilization as high as 90%. It also enables OEMs to differentiate SoCs and explore different AI models, avoiding risks of impacts from models changing and growing down the road to Level 5.

We’ll save more details of the solution and the discussion of ultra-efficient heterogeneous SoCs for Level 5 self-driving for the Expedera white paper itself – which you can download here:

The Road Ahead for SoCs in Self-Driving Vehicles


WEBINAR: Scalable, On-Demand (by the Minute) Verification to Reach Coverage Closure

by Synopsys on 09-13-2022 at 10:00 am

Synopsys Verification Cloud Solutions

Verification has long been the most time-consuming and often resource-intensive part of chip development. Building out the infrastructure to tackle verification can be a costly endeavor, however. Emerging and even well-established semiconductor companies must weigh the Cost-of-Results (COR) against Time-to-Results (TTR) and Quality-of-Results (QOR).

The Synopsys Cloud Verification Instance is the first scalable, on-demand verification solution. Emerging companies can kick-start their verification with pre-configured flows implemented by the minute. Organizations with a verification environment already in place can quickly scale their verification when additional computation power is needed. These ready-to-use and automated verification flows reduce the manual and often error-prone verification effort to reach coverage closure quickly and increase design quality.

Attendees will walk away with an understanding of how Synopsys Cloud Verification Instance can be easily deployed to meet verification challenges associated with COR, TTR, and QOR to achieve confidence in coverage closure.

REGISTER NOW

Speakers

Listed below are the industry leaders scheduled to speak.

Sridhar Panchapakesan

Sr. Director, Cloud Engagements
Synopsys

Sridhar Panchapakesan is the Senior Director, Cloud Engagements at Synopsys, responsible for enabling customers to successfully adopt cloud solutions for their EDA workflows. He drives cloud-centric initiatives, marketing, and collaboration efforts with foundry partners, cloud vendors, and strategic customers at Synopsys. He has 25+ years of experience in the EDA industry and is especially skilled in managing and driving business-critical engagements with top-tier customers. He has an MBA degree from the Haas School of Business, UC Berkeley, and an MSEE from the University of Houston.

Rob van Blommestein

Product Marketing Manager, Sr. Staff
Synopsys

Rob van Blommestein is the product marketing manager for the Verdi Automated Debug System at Synopsys. With over 20 years of experience in marketing verification products from startups to large-scale companies, he is a marketing executive with a demonstrated history of success establishing brands and growing business in a variety of sectors including electronic design automation (EDA), FPGA, high-performance computing, IoT, machine learning, and artificial intelligence (AI).

REGISTER NOW

Faster, Earlier System Verification

With designs at advanced nodes or those approaching reticle limits, it’s more of an imperative to verify comprehensively and thoroughly across all stages of the design flow. When in-house compute resources are limited, the cloud provides a welcome advantage. Synopsys cloud-based verification solutions can accelerate software bring-up and system validation while leveraging the scalability and fine-grained parallelism technology of the Verification Continuum® Platform, including the security and flexibility that the hosted ZeBu® Cloud solution provides.

Reduce Simulation Turnaround Time

Exhaustive functional verification calls for high-performance simulation and constraint solver engines. That’s what you get with Synopsys VCS® simulation solution in the cloud. Speed up high-activity, long-cycle tests by allocating more cores when needed. Leverage seamless data-sharing between simulations by using VCS containers.​

With our functional verification flow, you’ll benefit from verification planning, coverage analysis, and closure solutions, as well as native integration with the Synopsys Verdi® de-facto debug environment and access to industry-first verification IP (VIP) for the latest protocols and memory models.​

About Synopsys

Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry’s broadest portfolio of application security testing tools and services. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at www.synopsys.com.

Also Read:

WEBINAR: Intel Achieving the Best Verifiable QoR using Formal Equivalence Verification for PPA-Centric Designs


Connecting SystemC to SystemVerilog

by Bernard Murphy on 09-13-2022 at 6:00 am

UVM Connect

Siemens EDA is clearly on a mission to help verifiers get more out of their tools and methodologies. Recently they published a white paper on UVM polymorphism. Now they have followed with a paper on using UVM Connect, re-introducing how to connect between SystemC and SystemVerilog. I’m often mystified by seemingly overlapping or adjacent efforts between verification capabilities and standards, here in support of co-simulation. My contribution in this article (I hope) is to resolve my own confusion and to answer why this problem is important. I’ll leave the Siemens EDA white paper to handle the details.

Groping through the fog

UVM Connect sounds like it would be a feature of UVM or UVM-SystemC, right? Wrong. UVM Connect is an independent open-source UVM-based library from Siemens EDA, introduced in 2012, enabling TLM communication between UVM/SystemVerilog and SystemC. Meanwhile, the UVM-SystemC Library 1.0-beta4 was released only recently, and UVM-SystemC does not support language mixing (as of the current beta release). Even so, Siemens EDA is very clear that UVM Connect will continue to be valuable in the presence of UVM-SystemC.

Like I said, confusing. There are areas of apparent overlap, but maybe that overlap isn’t important. UVM Connect is an extension to the UVM standard, invented long before UVM-SystemC, to solve a real problem. Will that solution continue to be relevant? Based on the Siemens-EDA white paper it seems the answer is yes, whatever may happen to UVM-SystemC. Maybe UVM and UVM-SystemC will eventually settle into one standard. In which case I would expect the functionality of UVM Connect to be absorbed in some manner.

Why connect SystemC and SystemVerilog?

Architectural designers work in SystemC (or C/C++). Implementation designers work in Verilog – SystemVerilog if they are designing testbenches. How do they check and debug the implementation testbench? Ideally by running the architectural model under that testbench. How do they check that the implementation model matches the architectural model? Through co-simulation, requiring that they run and compare the SystemC model and the implementation model under the UVM testbench. Both methods can benefit from UVM Connect, connecting the SystemC model to the UVM/SystemVerilog environment and vice versa.

Equally, having that connection allows verification to use both RTL-based and SystemC-based VIP, expanding and accelerating testbench development. Some might also argue this capability enables UVM to stretch up to system-level verification, allowing constrained random tests generated in UVM to be applied to SystemC models. Today, I think that is more of a PSS domain, but the UVM Connect approach certainly works in principle.

Why not use DPI?

Isn’t this getting a little too complicated? SystemVerilog provides a Direct Programming Interface (DPI), which offers a standard way to connect SystemVerilog and C++. Since SystemC is C++, a solution already exists; why add another? My guess is that the DPI approach is a bit too low-level for many of these applications, and such solutions will invariably be non-portable. In contrast, transaction-level modeling (TLM) is a well-established paradigm for handling data exchange between different domains. SystemC is intrinsically TLM-based, and UVM provides TLM communication interfaces. UVM Connect simply formalizes this connection in a nice, easy-to-use way.

My takeaway? UVM Connect is a practical way to connect SystemC models into a UVM testbench in support of implementation verification. Certainly much easier than DPI. More ambitious goals to blend UVM with SystemC and perhaps SystemVerilog may be the long term goal but are not an answer to today’s needs. You can learn more about UVM Connect HERE.


Truechip: Customer Shipment of CXL3 VIP and CXL Switch Model

by Kalar Rajendiran on 09-12-2022 at 10:00 am

CXL Block Diagram

The tremendous amount of data generated by AI/ML-driven applications and other hyperscale computing applications has forced the age-old server architecture to change. The new architecture is driven by the resource disaggregation paradigm, wherein memory and storage are decoupled from the host CPU and managed independently through high-speed connectivity. The Compute Express Link (CXL) standard is a direct result of this evolution in server architecture to support high-speed, low-latency, cache-coherent interconnect. The CXL specification delivers high performance while leveraging PCI Express® technology to support rapid adoption. CXL switching features resource pooling, enabling the host CPU to access one or more devices from the resource pool. While the CXL 2.0 specification (CXL2) supports single-level switching, the CXL 3.0 specification (CXL3) supports multi-level switching, wherein the host CPU can leverage different resources in a tiered fashion. CXL3 also introduces fabric capabilities and management, improved memory sharing and pooling, enhanced coherency, and peer-to-peer communication. The spec also doubles the data rate to 64GT/s with no added latency over CXL2.

The specification is also evolving fast, with CXL3 released just three years after CXL1 was released. Truechip has a long track record of offering VIP solutions to a broad list of customers worldwide. It offers an extensive portfolio of VIP solutions to verify IP components interfacing with industry-standard protocols integrated into ASICs, FPGAs and SoCs. As a Verification IP Specialist, Truechip has been offering VIP solutions to support the CXL standard right from the start. For details on their entire portfolio of VIP offerings, visit the products page.  They recently expanded their portfolio with the addition of CXL3 and CXL Switch VIP solutions. You can read their press announcement about first customer shipment of CXL 3 verification IP and CXL switch model.

Truechip’s CXL3 VIP Solution

Truechip’s CXL Verification IP provides an effective and efficient way to verify the components interfacing with the CXL connectivity of an IP or SoC. Their CXL VIP is fully compliant with the latest CXL specification. The solution is lightweight, with an easy plug-and-play interface, so there is no impact on design cycle time. It is offered in native SystemVerilog (UVM/OVM/VMM) and Verilog.

The following figure depicts a block diagram of Truechip’s CXL3 VIP environment.

Some Salient Features

  • Configurable as CXL Host and Device when operating in Flex Bus mode
  • Configurable as PCI Express Root Complex and Device Endpoint when operating in PCIe mode
  • Supports 64.0 GT/s Data Rate with backward compatibility
  • Supports Pipe Specification 6.1.1 with both Low Pin Count and Serdes Architecture
  • Supports Configurable timeout for all three layers
  • Supports different CXL/PCIe Resets
  • Supports Arbitration among the CXL.IO, CXL.cache and CXL.mem packets with interleaving of traffic between different CXL protocols
  • Offers a comprehensive user API for callbacks
  • Provides built-in Coverage analysis
  • Supports all 3 coherency models HDM-D, HDM-H and HDM-DB to access HDM memory

Deliverables

CXL Host/Device

CXL BFM/Agents for:

    • Host and Device sequences
    • Transaction layer (CXL.IO and CXL.cache, CXL.mem)
    • Link layer (CXL.IO and CXL.cache, CXL.mem)
    • Arbiter/Mux layer
    • Phy layer

CXL Monitor and Scoreboard

Test Environment & Test Suite:

    • Basic and Directed Protocol Tests
    • Random Tests Error Scenario Tests
    • Cover Point Tests
    • Compliance Tests

Integration Guide, User Manual, Quick start Guide, FAQs and Release Notes

Truechip’s CXL Switch Model

Truechip’s CXL Switch VIP provides an effective and efficient way to verify the components interfacing with the CXL Switch interface of an IP or SoC. Truechip’s CXL Switch model is fully compliant with the latest CXL specification. The model supports Hot Add and Hot Remove for a CXL device and is available in native SystemVerilog (UVM/OVM/VMM) and Verilog.

The following Figure depicts a block diagram of the CXL3 VIP environment when the system implementation incorporates the switching capability.

Aspects Common to All of Truechip’s VIP Solutions

Although covered in an earlier blog, it is worth reiterating some advantages that cut across all of Truechip’s VIP solutions. All solutions come with an easy plug-and-play interface to enable a rapid development cycle. The VIPs are highly configurable by the user to suit the verification environment. They also support a variety of error injection scenarios to help stress test the device under test (DUT). Their comprehensive documentation includes user guides for various scenarios of VIP/DUT integration. Truechip’s VIP solutions work with all industry-leading dynamic and formal verification simulators. The solutions also include assertions that can be used in formal and dynamic verification as well as with emulation. And their solutions come with the TruEYE GUI-based tool that makes debugging very easy. This patented debugging tool reduces debugging time by up to 50%.

For more information, refer to Truechip’s website.

Also Read:

Truechip’s Network-on-Chip (NoC) Silicon IP

Truechip’s DisplayPort 2.0 Verification IP (VIP) Solution

Bringing PCIe Gen 6 Devices to Market


WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken

by Daniel Nenni on 09-12-2022 at 6:00 am


Way back in the early 2000s, when XAUI was falling short on link flexibility, a search for an alternative chip-to-chip data transfer interface with SPI-like features led Cisco Systems and Cortina Systems to propose the Interlaken standard. The new standard married the best of XAUI’s serialized data and SPI’s flow control capabilities. To this day, the continuous growth in data consumption is driving demand for higher speeds, but also lower power-per-bit, equating to lower cost-per-bit. Reliability is, of course, also a key requirement. Fortunately, ongoing developments and extensions to the Interlaken standard allow it to remain up to the challenge of today’s high-bandwidth links. Interlaken has found its way into applications involving HPC (High Performance Computing), telecommunications, data center NPUs (Network Processing Units), traffic management, switch fabrics, TCAMs (Ternary Content Addressable Memories), and serial memories.

Watch the Replay HERE

Interlaken operates on packetized data, allowing multiple logical channels to share a common set of high-speed lanes. The data rates on the logical channels can vary, which allows mixing high-speed, high-throughput data sources with sparsely transmitting, occasional-usage sources over a shared set of physical lanes. Paired with rate-matching and flow-control mechanisms, this makes the interface extremely flexible in terms of link sharing and data mapping. Data packets can be interleaved so that large packets do not block the link, allowing transmission to be balanced between multiple channels or priority to be given to urgent control packets.
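The interleaving idea can be sketched with a toy round-robin scheduler. This is purely illustrative — real Interlaken bursts carry control words, per-channel flow-control state, and CRC, none of which is modeled here:

```python
# Toy round-robin interleaver: slice each channel's packet into
# fixed-size bursts so no single large packet monopolizes the link.
# (Illustrative only -- not the actual Interlaken burst format.)
from collections import deque

def interleave(channels, burst_words=4):
    """channels: {channel_id: list of payload words} -> list of (ch, burst)."""
    queues = {ch: deque(words) for ch, words in channels.items()}
    link = []
    while any(queues.values()):
        for ch, q in queues.items():
            burst = [q.popleft() for _ in range(min(burst_words, len(q)))]
            if burst:
                link.append((ch, burst))
    return link

# A large packet on channel 0 shares the link with a small one on channel 1:
# the small packet gets onto the wire after the first burst, not after all
# ten words of the large packet.
sched = interleave({0: list(range(10)), 1: ["ctrl"]})
print(sched)
```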

Data integrity and reliability are achieved with multiple levels of CRC-based error detection as well as RS-FEC-based error correction. The RS-FEC error correction mechanism was introduced in 2016 as an extension of the standard to address the high BER (bit error rate) of PAM4 links. If an error occurs, the Retransmit extension from 2010 allows the standard to handle the situation without involving the upper control layers: the out-of-band flow control interface is used to request that the transmitter retransmit data from its internal buffer, allowing the receiver to pick up the data stream at the point the error was detected and resolve the error.
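The detection principle behind those CRC layers can be shown with a generic CRC-32 frame check. Interlaken itself uses its own CRC polynomials over its burst and lane formats; this sketch only demonstrates how a receiver detects corruption:

```python
# Generic CRC frame check: append a checksum on transmit, recompute and
# compare on receive. Any single-bit corruption changes the CRC.
import binascii

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corruption."""
    crc = binascii.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check(framed: bytes) -> bool:
    """Recompute the CRC over the payload and compare to the trailer."""
    payload, crc = framed[:-4], int.from_bytes(framed[-4:], "big")
    return binascii.crc32(payload) == crc

f = frame(b"interlaken burst")
print(check(f))                            # intact frame passes
corrupted = bytes([f[0] ^ 0x01]) + f[1:]   # flip one bit in the payload
print(check(corrupted))                    # corruption is detected
```

In Interlaken, a failed check like this is what triggers the retransmit request over the flow-control path rather than an error being passed up the stack.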

Other extensions to the interface include the Dual Calendar Extension from 2014, and the Look Aside Extension from 2008. The Dual Calendar allows the addition and removal of channels or change channel priority during operation. Examples of use cases would be for Online Insertion or Removal (OIR) of interfaces or possibly for dynamic re-provisioning of channel bandwidth. The Look Aside Extension defines a lightweight, alternative version of the standard, to facilitate interoperability between a data path device and a look-aside co-processor. It is suitable for short, transaction-related transfers and since it is not directly compatible with Interlaken it should be considered a different operational mode.

Watch the Replay HERE

About Comcores

Comcores is a key supplier of digital IP cores and solutions for digital subsystems, with a focus on Ethernet solutions, wireless fronthaul and C-RAN, and chip-to-chip interfaces. Comcores’ mission is to provide best-in-class, state-of-the-art, quality components and solutions to ASIC, FPGA, and system vendors, thereby drastically reducing their product cost, risk, and time to market. Our long-term background in building communication protocols, ASIC development, wireless networks, and digital radio systems has provided a solid foundation for understanding the complex requirements of modern communication tasks. This know-how is used to define and build state-of-the-art, high-quality products used in communication networks.

To learn more about this solution from Comcores, please contact us at sales@comcores.com or visit www.comcores.com

Also Read:

CEO Interview: John Mortensen of Comcores

Bridging Analog and Digital worlds at high speed with the JESD204 serial interface

LIVE Webinar: Bridging Analog and Digital worlds at high speed with the JESD204 serial interface


GM Buyouts: Let’s Get Small!

by Roger C. Lanctot on 09-11-2022 at 6:00 am


“I sell new cars to legitimize my used car business.”  – Wes Lutz, president Extreme Chrysler/Dodge/Jeep, RAM Inc., Jackson, Mich. National Automobile Dealer Association board member

Since taking over at General Motors, CEO Mary Barra has made many radical adjustments in the company’s international footing in the interest of setting the stage for and investing in an electrified future. The company has committed to the creation of multiple large scale Ultium battery manufacturing facilities throughout the U.S. and trumpeted plans for the electrification of the entire GM lineup.

In the process, GM has exited multiple overseas markets including Europe, Russia, Thailand, India, Australia, South Africa, and New Zealand (maintaining some export business in some). Domestically, the company has been rationalizing its North American dealer network.

Two years ago, GM offered buyouts to Cadillac dealers that were unwilling to make six-figure investments in maintenance facility upgrades, charging stations, and employee training in advance of the arrival of the Cadillac Lyriq EV. According to dealer consultant Steve Greenfield, the Cadillac buyouts ranged from $300,000 to $500,000 vs. required investments of $200,000.

The Cadillac buyouts reduced the U.S. dealer base by about a third. GM asserted that dealers mainly located in rural areas or non-EV-oriented markets were the focus. (Wouldn’t it be ironic if those bought out Cadillac dealers turned around and added Vinfast or Polestar franchises?)

Now news arrives that Buick dealers are on the chopping block, so to speak. As in the case of Cadillac, GM expects that one third of Buick dealers, mainly those in rural or non-EV-inclined markets, are likely to sever their ties with the brand rather than invest in selling and servicing an EV-only Buick lineup expected to take effect in 2024.

The cognitive dissonance of GM’s enthusiastic embrace of EV technology driving an ongoing contraction of GM’s global and domestic vehicle distribution network is extraordinary. Even before these reductions, GM signaled its anticipated departure from sedan segments encompassing such vaunted models as the Malibu, Impala, Cruze, and Regal.

A narrowed lineup of vehicles sold in fewer stores in fewer markets hardly seems to be a recipe for success. The first move was the reduction in the variety of vehicles, which could clearly be seen as a savvy strategy to focus development on a narrower range.

This move made a lot of sense and looks prescient in view of the post-COVID world characterized by troubled supply chains and chip shortages. It also makes sense in the context of a range of EV startups able to focus all of their marketing and sales efforts on one or two vehicles.

The global pullback, too, could be seen as wise. GM was arguably over-extended with limited growth prospects. Subsequent events have borne out the wisdom of these multiple global market departures – especially exits from Russia and Europe, now engulfed in political turmoil and a fuel crisis.

But parting company with one third of an already shrinking dealer base seems uniquely ill-timed. Of course, the key issue is the appearance of GM buying dealers out of their franchises – and for so little! Given the current demand for automobiles generally and EVs, in particular, one might expect dealers to be clinging to their cherished OEM relationships.

In fact, given the importance of EVs to GM’s future, one might expect GM to subsidize the needed dealer upgrades. The reduction in Cadillac dealers took the number of locations from 900 down to 565. Buick begins the process with 2,000 dealers.

With Tesla currently boasting approximately 120 service centers in the U.S., a clear picture begins to emerge of a legacy auto maker cutting back distribution (and service) infrastructure while an emerging rival is adding sales and service resources.

GM is on solid ground cutting back on dealers. The conventional wisdom in the industry has long been that there are simply too many new car dealers in the U.S.  Not surprisingly, those numbers have been steadily falling.

Most dealers have seen per-store sales decline, a trend recently reversed by current vehicle shortages. Meanwhile, profits are up along with vehicle prices and markups.

Investors continue to view new car dealers as solid investments with dealer acquisitions on the rise – reflecting an evolving consolidation of distribution. Reducing the number of Buick and Cadillac dealers certainly enhances the value of the dealers that remain in the fold – but a thinning of the dealer ranks will make reaching consumers more problematic.

GM dealers as a group do not make the top ten list of average number of vehicles sold per dealership. Those rankings are dominated today by import makes. Maybe a shorter roster of dealers will improve per-dealer throughput – or maybe it will further erode sales and market share.

It is troubling that GM has determined that it can’t “sell” its own dealers on the prospect of selling EVs. Consumers are lining up to place deposits on new EVs soon-to-be arriving from every make in the market – while hundreds of Cadillac and, soon, Buick dealers are saying: “No thanks.”

In the end, I have to look at GM’s decision from a personal perspective. For nearly every import brand sold in the U.S., I can think of multiple dealer locations that exist within a short distance from my home. When I think of Buick or Cadillac and search for their nearby sales locations, I am looking at a half hour drive or more.

GM’s decisions are clearly financially motivated. The company is marshalling its resources to sell a greatly shortened lineup of vehicles through a diminished network of dealers in a resource constrained market plagued by chip shortages and supply chain snags.

Rather than rallying its retail partners for the coming transformation to new powertrain technology, GM is paying dealers approximately twice as much to quit as it would be asking them to invest to take on the new challenge.

GM is left with a diminished market presence – fewer car models, fewer dealers, fewer overseas markets – and an ever-expanding competitive set of imports and startups. GM appears, in effect, to be self-strangling its way to greater profitability. At the very least, a reduction in the size of GM’s dealer network on the eve of massive EV launches sends an ominous message for consumers and investors – and maybe dealers.

Also Read:

MAB: The Future of Radio is Here

GM: Where an Option is Not an Option

C-V2X: Talking Cars: Toil & Trouble


Podcast: Intel’s RISC-V Ecosystem initiative with Junko Yoshida

Podcast: Intel’s RISC-V Ecosystem initiative with Junko Yoshida
by Daniel Nenni on 09-09-2022 at 10:00 am

Welcome to our Podcast on Intel’s RISC-V Ecosystem initiative. I’m Junko Yoshida, Editor in Chief of the Ojo-Yoshida Report. Joining me today to discuss the topic are Vijay Krishnan, general manager of RISC-V Ventures at Intel Corp., and Emerson Hsiao, chief operating officer of Andes Technologies USA.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Jan Peter Berns from Hyperstone

CEO Interview: Jan Peter Berns from Hyperstone
by Daniel Nenni on 09-09-2022 at 6:00 am


Since 2012, Dr. Jan Peter Berns has been the CEO of Hyperstone, a producer of flash memory controllers for industrial embedded storage solutions. Before that, he held a senior management position at Toshiba Electronics for several years. Jan Peter brings more than 20 years of management and executive experience in the semiconductor and electronics market.

Hyperstone was founded in 1990 by the German computer pioneer Otto Müller. After selling his previous company “Computertechnik Müller (CTM)” to the Diehl group, he assembled a small team and developed a 32-bit RISC processor. Starting in 1990, Hyperstone marketed the processor core first as a silicon IP block and later also as a general-purpose microprocessor chip. In 1996 the design was enhanced into an efficient architectural combination of RISC and DSP, making it well suited for the emerging digital camera boom. In this context, one of the licensees was Lucky Goldstar, today better known as LG Electronics. LG requested that Hyperstone develop a NAND flash controller chip with accompanying firmware. That moment was the inception of the company’s product focus today.

What problems are Hyperstone addressing?
NAND flash is inherently unreliable at storing data. Higher densities and complex 3D structures have made data storage exceedingly complex over the last decade. Bit errors, cell wear and tear, deteriorating data retention and read disturbs are just some of the physical effects that need to be mitigated to ensure data can be stored efficiently. Achieving the highest levels of reliability and security, the lowest field failure rates and the best functional safety is Hyperstone’s mission.

What are the solutions proposed by Hyperstone?
Hyperstone is working closely with flash vendors globally to understand the growing complexities of NAND flash failure modes that have come hand in hand with higher-density flashes. These insights allow Hyperstone to push the boundaries of reliable NAND flash storage and are one of the many building blocks that enable the company’s controllers to turn NAND flash into storage that lives up to the most critical requirements.

Why should someone choose Hyperstone?
While consumer grade storage is sufficient in cameras, tablets and mobile devices, industrial applications have higher, more intricate demands regarding reliability, availability, and data integrity. This is Hyperstone’s expertise – the insights and experience in designing for industrial grade storage and supporting customers globally throughout the entire design process. From telecom base-band stations to automotive systems to IoT devices and robots operating in the industrial automation setting, Hyperstone has the knowledge to identify the unique demands of any use case and design to its specifications.

Which markets are Hyperstone targeting?
Hyperstone currently serves a range of industrial markets. The strengths of the company’s R&D lie in embedded security and reliability, two tenets that gear Hyperstone toward the industrial IoT, security, and automotive markets. The company also supports, and has significant experience in, industrial automation, telecommunications, the energy sector, medical, and transport applications.

What customer problems have you solved thus far?
A lack of use case understanding is the most common and critical issue the company has identified over the years. The assumption that one size fits all, especially for industrial-grade applications, is the root cause of customers experiencing issues with their storage solutions.

When partnering with Hyperstone on a project, the first question asked is: what are the demands of the storage system at hand? By identifying these unique demands, Hyperstone can optimize the flash controller to best support the requirements of the system. At the end of the day, there is no single problem solved; the company optimizes solutions for specific use case demands.

What do the next 12 months have in store for Hyperstone?
Within the next year, Hyperstone will launch a new SD controller and achieve significant development milestones toward its next eMMC controller. Demand for the company’s products and for reliable NAND storage is surging.

Which markets do you feel offer the best opportunities for Hyperstone over the next few years and why?
The company’s support for all industrial markets worldwide won’t change. Hyperstone does, however, acknowledge the growing demand for reliable and secure storage in the automotive, industrial IoT, and security arenas. These are three markets where Hyperstone’s key differentiators, reliability and security, are crucial.

How is Hyperstone responding to the current semiconductor shortage, especially in the European market?
The current semiconductor shortage has impacted the entire industry. While the company can’t avoid it entirely, Hyperstone has long taken measures to ensure supply shortages are managed as swiftly as possible. This includes buying wafer allocation and tester slots as well as qualifying new substrate and test-service suppliers.

Hyperstone has long established second sourcing for critical process steps so it can react flexibly to any shortages. To ensure the company’s quality expectations are not compromised, Hyperstone has also accepted increased pricing for major parts. At the end of the day, strong relationships with suppliers and service providers have ensured the company’s success in these tumultuous times.

And how has the pandemic affected Hyperstone and its customers?
The pandemic impacted Hyperstone’s customer base in unique ways. Different markets experienced different levels of shortages and demands were very volatile as well. While the medical manufacturing market showed significant growth, other markets like the automotive industry were hit hard by the pandemic or supply chain disruptions. Hyperstone was well positioned to benefit from growing markets like medical, 5G and security.

Last question, what is Hyperstone’s future roadmap and direction?
Hyperstone has been growing its R&D teams significantly in the last two years and will continue to do so. The company will expand its portfolio of memory controllers and offer a comprehensive line of storage solutions for industrial and automotive applications. Another major strategic focus is going to be on IoT and security applications.

About Hyperstone
Pioneers in the NAND flash memory controller business, at Hyperstone we design and develop highly reliable, robust controllers for industrial and embedded NAND flash-based storage solutions. We pride ourselves on developing innovative solutions, which enable our customers to produce world-class products for global data storage applications. Our flash memory controller portfolio supports a range of interfaces and form factors including SecureDigital (SD) cards, microSD, USB flash drives, Compact Flash (CF) cards, Serial ATA (SATA) and Parallel ATA (PATA) SSDs, Disk-on-Module (DoM) and Disk-on-Board (DoB) solutions as well as embedded flash solutions such as eMMC.

Also read:

Selecting a flash controller for storage reliability