
Coherency in Heterogeneous Designs
by Bernard Murphy on 09-01-2022 at 6:00 am

Ncore application

Ever wonder why coherent networks are needed beyond server design? The value of cache coherence in a multi-core or many-core server is now well understood. Software developers want to write multi-threaded programs for such systems and expect well-defined behavior when accessing common memory locations. They reasonably expect the same programming model to extend to heterogeneous SoCs, not just for the CPU cluster in the SoC but more generally. Consider a surveillance application based on an intelligent image-processing pipeline feeding into inferencing and recognition to detect abnormal activity. Stages in such pipelines share data that must remain coherent. However, these components interface through AMBA CHI, ACE and AXI protocols – a mix of coherent and non-coherent interfaces. The Arteris IP Ncore network is the only coherent network IP that can accomplish this objective.

Coherency in a Heterogeneous Design

An application like a surveillance camera depends on high-performance streaming all the way through the pipeline. A suspicious figure may be in-frame only for a short time, yet you still must capture and recognize a cause for concern. The frames/second inference rate must be high enough to meet that goal, enabling camera tracking to follow the detected figure.

The imaging pipeline starts with the CPU cluster processing images, tiled across multiple parallel threads for maximum performance. To further maximize performance, memory accesses are cached to the greatest extent possible, and that cache network must therefore be coherent. The CPUs support CHI interfaces; so far, so good.

But image signal processing is a lot more complex than just reading images: there’s demosaicing, color management, dynamic range management and much more. These steps may be handled in a specialized GPU or DSP function, which must deliver the same caching performance boost so it does not slow down the pipeline, and for which thread programming expects the same memory consistency model. Often this hardware function supports only an ACE interface. ACE is coherent but different from CHI, so the design now needs a coherent network that can support both.

Those threads feed into the AI engine, which infers suspicious objects in images at, say, 30 frames/second, aiming to detect not only such an object but also its direction of movement. AI engines commonly support an AXI interface, which is widely popular but not coherent. However, the control front-end to that engine must still see a coherent view of the processed image tiles streaming into the engine. Meeting that goal requires special support.

The Arteris IP Ncore coherent network

The Arteris IP FlexNoC non-coherent network serves the connectivity needs of much of a typical SoC, which may not need coherent memory sharing with CPUs and GPUs. The AI accelerator itself may be built on a FlexNoC network. But a connectivity solution is needed to manage the coherent domain as well. For this, Arteris IP has built its Ncore coherent NoC generator.

Think of Ncore as a NoC with all the regular advantages of such a network but with a couple of extra features. First, the network provides directory-based coherency management. All memory accesses within the coherent domain, such as CPU and GPU clusters, adhere to the consistency model. Second, Ncore supports CHI and ACE interfaces. It also supports ACE-Lite interfaces with embedded cache, which Arteris IP calls proxy caches. A proxy cache can connect to an AXI bus in the non-coherent domain, complementing the AXI data on the coherent side with the information required to meet the ACE-Lite specification. A proxy cache ensures that when the non-coherent domain reads from the cache or writes to the cache, those transactions will be managed coherently.
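To make the directory and proxy-cache ideas more concrete, here is a minimal, hypothetical sketch in Python. It is not Arteris IP's Ncore implementation; the class names and print-based invalidation are invented purely to show how a directory tracks sharers and how a proxy cache lets a plain AXI initiator participate in coherent transactions.

# Minimal, hypothetical sketch of directory-based coherency with a proxy cache.
# Illustrative only; this is not Arteris IP's Ncore implementation.

class Directory:
    """Tracks, per cache line, which agents currently hold a copy."""
    def __init__(self):
        self.sharers = {}          # line address -> set of agent names

    def read(self, agent, addr):
        # Grant a shared copy and record the reader as a sharer.
        self.sharers.setdefault(addr, set()).add(agent)

    def write(self, agent, addr):
        # Invalidate every other sharer before granting exclusive ownership.
        for other in self.sharers.get(addr, set()) - {agent}:
            print(f"invalidate {addr:#x} in {other}")
        self.sharers[addr] = {agent}


class ProxyCache:
    """Front-end that makes plain AXI reads/writes look like coherent agents."""
    def __init__(self, name, directory):
        self.name, self.directory, self.lines = name, directory, {}

    def axi_read(self, addr, memory):
        self.directory.read(self.name, addr)       # join the sharer list
        self.lines[addr] = memory.get(addr, 0)
        return self.lines[addr]

    def axi_write(self, addr, data, memory):
        self.directory.write(self.name, addr)      # invalidate other copies
        self.lines[addr] = data
        memory[addr] = data


# Usage: a CHI CPU cluster and an AXI AI engine (via a proxy cache) share a tile.
memory, directory = {0x1000: 42}, Directory()
directory.read("cpu_cluster", 0x1000)              # CPU caches the tile
proxy = ProxyCache("ai_engine_proxy", directory)
proxy.axi_write(0x1000, 99, memory)                # AXI write stays coherent

In a real NoC the directory, snoop traffic and proxy caches are implemented in hardware, but the bookkeeping follows this same pattern.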

Bottom line: Ncore provides the only commercial solution for network coherency between CHI, ACE and AXI networks, the kinds of networks you will commonly find in most SoCs. If you’d like to learn more, click HERE.


Podcast EP104: Enabling Future Innovation with GBT Technologies
by Daniel Nenni on 08-31-2022 at 10:00 am

Dan is joined by Dr. Danny Rittman, CTO of GBT Technologies. Danny has an extensive background in the R&D space and has worked for companies such as Intel, IBM, and Qualcomm. He has spent most of his career researching and inventing processor chips, as well as paving the way for futuristic AI software programs that can be successfully used in a vast number of industries.

Dan explores the broad portfolio of GBT Technologies with Danny – its IP, architectures, design tools and application areas. The impact of this technology portfolio is discussed in some detail.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

 


Five Key Workflows For 3D IC Packaging Success
by Kalar Rajendiran on 08-31-2022 at 6:00 am

3D IC design workflows

An earlier blog started with the topic of delivering 3D IC innovations faster. The blog covered the following foundational enablers for successful heterogeneous 3D IC implementation.

  • System Technology Co-Optimization (STCO) approach
  • Transition from design-based to systems-based optimization
  • Expanding the supply chain and tool ecosystem
  • Balancing design resources across multiple domains
  • Tighter integration of the various teams

The UCIe (Universal Chiplet Interconnect Express) standard is driving the adoption of heterogeneous chiplet integration and, with it, the adoption of 3D IC implementations. When a new capability gets ready for the mainstream, its mass-adoption success depends on a number of things. While the foundational enablers are important, they are not sufficient for easily, quickly, effectively and efficiently delivering a successful solution. Standardized protocols are needed to offer plug-and-play compatibility between chiplet suppliers. With new requirements and signoffs, design tools evolve to meet and resolve any new challenges that arise. What mainstream users need is a way to best use the tools to get an edge in the competitive marketplace. That boils down to key workflows, which are the topic of a recent whitepaper published by Siemens EDA. This blog will cover the salient points from that whitepaper.

Workflow Adoption

There are two approaches to a chiplet based design. One approach uses a process of disaggregation wherein a complex monolithic chip is decomposed into plug-and-play modules to be assembled and interconnected with a silicon interposer. The other approach uses general purpose building block chiplets that are assembled and interconnected with an ASIC to build the system. Whichever approach is adopted, chiplet based designs add levels of complexity that must be understood and planned for.

The following five workflow adoption focus areas lend themselves to a managed methodology process that minimizes risk and cost and accelerates time to market.

Early Planning and Predictive Analysis

Early planning and predictive analysis of the complete package assembly is mandatory for heterogeneous chiplet-based systems. This involves thermal and power delivery considerations, chiplet co-optimization, and chiplet interface management and design.

Chiplets often introduce new connectivity structures such as 3D stacking, TSVs, bumps, hybrid bonding and copper pillars. These structures can cause thermally induced stresses leading to performance and reliability problems. Investigating the connectivity structures for available alternative material options can help prevent unexpected failures that require late-stage design changes.

Predictive power delivery analysis should be performed early in the process. Even though it may be approximate, this analysis averts later-stage layout issues. A typical approach is to approximate the percentage of metal coverage per routing layer using a Monte Carlo type sweep analysis. This analysis helps identify, and communicate to the layout team, the parts of the circuit that would have the greatest impact on performance.
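As a rough illustration of that kind of sweep, the sketch below randomly samples per-layer metal coverage and correlates it with a toy IR-drop estimate to rank which layers matter most. The layer names, sheet resistances and current figure are assumptions made up for the example, not values from the whitepaper.

# Hypothetical Monte Carlo sweep over per-layer metal coverage to rank which
# layers most influence supply IR drop. Numbers and model are illustrative only.
import random

LAYERS = ["M1", "M2", "M3", "RDL"]        # assumed routing layers
SHEET_RES = {"M1": 0.12, "M2": 0.10, "M3": 0.08, "RDL": 0.02}  # ohm/sq (made up)
SUPPLY_CURRENT = 2.0                      # amps drawn through the grid (made up)

def ir_drop(coverage):
    # Toy model: each layer's effective resistance scales inversely with coverage.
    return sum(SHEET_RES[l] / max(coverage[l], 0.05) for l in LAYERS) * SUPPLY_CURRENT

samples = []
for _ in range(10_000):
    coverage = {l: random.uniform(0.2, 0.9) for l in LAYERS}   # fraction of metal coverage
    samples.append((coverage, ir_drop(coverage)))

# Crude sensitivity: covariance of each layer's coverage with the observed drop.
mean_drop = sum(d for _, d in samples) / len(samples)
for layer in LAYERS:
    mean_cov = sum(c[layer] for c, _ in samples) / len(samples)
    cov_term = sum((c[layer] - mean_cov) * (d - mean_drop) for c, d in samples)
    print(f"{layer}: covariance with IR drop = {cov_term / len(samples):+.4f}")

The layers with the strongest (most negative) covariance are the ones worth flagging to the layout team first.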

Waiting until the chiplet bump array is fully defined will delay package planning and limit the opportunity for co-optimization. Package planning can begin even before the chiplet design has started: the chiplet’s bump array and signal assignments can be created at the interposer level and passed to the IC design team for iteration as the design progresses.

Standardized interfaces and protocols are key for broad adoption of chiplet-based designs. At the same time, describing these interfaces brings new challenges for designers. Current approaches such as schematic description or HDL coding introduce the risk of human-introduced errors. To overcome this challenge, Siemens EDA has developed a new approach, called interface-based design, that lends itself to automation.

Automating Interface-Based Design

Interface-based design (IBD) is a new approach to capturing, designing, and managing the large numbers of complex interfaces that interconnect multiple chiplets. With an interface defined as an IBD object, the designer can focus on a higher level of connectivity abstraction. The interface description becomes part of the chiplet part model. When a designer places an instance of this chiplet, everything related to the interface is automatically put in place. This approach allows designers to explore, define, and visualize route planning without having to transition the design into a substrate place-and-route tool. It enables more insightful chiplet floorplanning and chiplet-to-package or chiplet-to-interposer signal assignments. The IBD methodology helps establish correct-by-design chiplet connectivity.
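Here is a minimal sketch of the concept, with hypothetical class and field names (Siemens EDA's actual IBD data model is not described in this blog): the interface definition lives with the chiplet part model, so placing an instance expands all of its interface connectivity automatically.

# Hypothetical sketch of interface-based design (IBD): the interface description
# travels with the chiplet part model, so placing an instance expands every
# signal of the interface automatically. Names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class InterfaceDef:
    name: str                       # e.g. a die-to-die link
    signals: list                   # ordered signal names in the bundle

@dataclass
class ChipletPart:
    name: str
    interfaces: list = field(default_factory=list)

@dataclass
class ChipletInstance:
    part: ChipletPart
    ref: str

    def expand_connectivity(self):
        # Everything related to the interface is put in place with the instance.
        return [f"{self.ref}.{itf.name}.{sig}"
                for itf in self.part.interfaces for sig in itf.signals]

# Usage: one die-to-die link described once, reused by every placed instance.
d2d = InterfaceDef("d2d_link", ["tx[15:0]", "rx[15:0]", "clk", "valid"])
io_chiplet = ChipletPart("io_chiplet", [d2d])
print(ChipletInstance(io_chiplet, "U1").expand_connectivity())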

Thermal, Stress and Reliability Management

Chiplets can add complex behaviors in terms of heat dissipation and thermal interactions between the chiplets and the substrates. Substrate stackup materials and chiplet proximity have considerable impact on thermal and stress performance. For this reason, predictive analysis before or during the prototyping/planning phase is very important. Starting analysis as far left in the process as possible allows maximum flexibility in making material choices and tradeoffs. Designers should generate power-aware thermal and stress device-level models to provide greater accuracy for thermal and mechanical simulations. Using a combination of chip-level and package/system thermal modeling, warpage, stress and fatigue points can be identified earlier in the design phase.

Test and Testability

Heterogeneous chiplet designs are very different from traditional designs. IEEE test standards are being developed to accommodate 2.5D/3D test methods. Different tool vendors may take different approaches to implementing these standards, which may cause test compatibility issues between chiplets that use different DFT vendors’ tools. For board-level testing, a composite BSDL file covering each of the internal components is preferred, but it may not be supported by all DFT tool vendors, which further complicates PCB-level testing.

Although each chiplet is assumed to be delivered as a known-good-die (KGD), each still needs to be re-tested after being assembled into the 3D-IC package. As such, a production test program must be provided for each of the internal components of the 3D-IC package. The tests need to run from the external package pins, most of which are not connected directly to the chiplet pins. In addition to the individual die testing, the die-to-die interfaces between chiplets need to be functionally tested as well.

Driving Verification and Signoff

To be able to release into manufacturing with confidence, we have to make sure that all the devices and substrates work together as expected. For this, it is important to start verification in the planning process and continue throughout the layout process. Such in-design validation provides early identification and resolution of manufacturing issues without running the full sign-off flow. When it comes to final design verification, it is important to also analyze various layout enhancements that will improve yield and reliability.

For more details on the “Five Key Workflows that Deliver 3D IC Packaging Success”, you can download the whitepaper published by Siemens EDA.

Also Read:

WEBINAR: Intel Achieving the Best Verifiable QoR using Formal Equivalence Verification for PPA-Centric Designs

A faster prototyping device-under-test connection

IC Layout Symmetry Challenges


WEBINAR: Intel Achieving the Best Verifiable QoR using Formal Equivalence Verification for PPA-Centric Designs
by Synopsys on 08-30-2022 at 10:00 am

Synopsys Fusion Compiler

Synopsys Fusion Compiler offers advanced optimizations to achieve the best PPA (power, performance, area) on today’s high-performance cores and interconnect designs. However, advanced transformation techniques available in synthesis such as retiming, multi-bit registers, advanced datapath optimizations, etc. are of little value if they cannot be verified through Formal Equivalence Verification (FEV). FEV setup must be rapid and provide out-of-the-box results to avoid becoming a bottleneck on advanced designs.

In this Synopsys webinar, Intel will share how it achieved the best QoR (Quality of Results) with an aggressive frequency target (3-4GHz). Using advanced optimization techniques, such as ungrouping and sequential optimizations, resulted in faster FEV convergence with a significant reduction in verification runtime as opposed to the long setup and runtimes designers face with traditional methods.

Attendees will walk away with an understanding of how Synopsys Formality Equivalence Checking captures design transformations and optimizations in Formality Guide Files (SVF) for rapid setup of the verification environment, avoiding multiple iterative runs. In addition, ML-driven adaptive distributed verification techniques will be highlighted, which help partition the design and run solvers in parallel to further accelerate verification runtime and improve out-of-the-box results.

Register Here

Speakers

Listed below are the industry leaders scheduled to speak:

Avinash Palepu

Product Marketing Manager, Sr. Staff
Synopsys

Avinash Palepu is the Product Marketing Manager for Formality and Formality ECO products at Synopsys. Starting with Intel as a Design Engineer, he has held various design, AE management, and product marketing roles in the semiconductor design and EDA industries.

Avinash holds a master’s degree in EE from Arizona State University and a bachelor’s degree from Osmania University.

Sidharth Ranjan Panda

Engineering Manager
Intel Corporation

Sidharth Ranjan Panda has 10 years of experience in the VLSI industry. He is responsible for execution and signoff convergence activities for formal equivalence verification, low-power verification, and functional ECO closure for all SoC/IP programs in the NEX BU at Intel. He is a major contributor to the development of verification tools, flows, and methodologies at Intel. Sidharth holds a master’s degree in EE from the Birla Institute of Technology and Science, Pilani.

Register Here

 

Fusion Compiler features a unique RTL-to-GDSII architecture that enables customers to reimagine what is possible from their designs and take the fast path to achieving maximum differentiation. It delivers superior levels of power, performance and area out-of-the-box, along with industry-best turnaround time.

About Synopsys

Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software™ partner for innovative companies developing the electronic products and software applications we rely on every day. As an S&P 500 company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and offers the industry’s broadest portfolio of application security testing tools and services. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing more secure, high-quality code, Synopsys has the solutions needed to deliver innovative products. Learn more at www.synopsys.com.

Also read:

An EDA AI Master Class by Synopsys CEO Aart de Geus

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow

DSP IP for High Performance Sensor Fusion on an Embedded Budget


A faster prototyping device-under-test connection
by Don Dingee on 08-30-2022 at 6:00 am

ProtoBridge from S2C provides a high-bandwidth prototyping device-under-test connection

When discussing FPGA-based prototyping, we often focus on how to pour IP from a formative SoC design into one or more FPGAs so it can be explored and verified before heading off to a foundry where design mistakes get expensive. There’s also the software development use case, jumpstarting coding for the SoC before silicon arrives. But there is another shift left use case – closer to project start when a pile of IP in the prototyping platform is tethered to a development host looking for signs of life. Creating a fast, productive prototyping device-under-test connection is the topic of a new white paper from S2C.

Facing the classic embedded design bring-up conundrum

Embedded designers have faced a similar board-level challenge for years. Bringing up a board requires being able to see into it with some interface. But a complex interface like USB requires both working hardware and a protocol stack for communication. When it works, it’s great. When it doesn’t, it can be hard to get visibility into what’s wrong. Ditto for other complex blocks where visibility is challenging.

Running around the board’s interior with a logic analyzer is where the fun, or lack thereof, begins. Traces may be tough to probe. Getting stimulus and triggering right to see the problem as it’s happening can be elusive. Moving a wide bus setup to different points around the board takes time. The more logic gets swept up into large devices like ASICs and FPGAs, the harder physical probing gets, and the more visibility drops.

Hoping to solve visibility in the embedded world, JTAG emerged first as a simple daisy-chain connection between chips or logic blocks inside an ASIC or FPGA. A JTAG analyzer is a simple gadget with four required wires and a fifth optional wire. The scheme works, but it can be painfully slow. If one is to solve the bring-up conundrum on an FPGA-based prototyping platform, which is a complex board with some big FPGAs, something much better is needed.

Moving to native interfaces on both sides of the workflow

Fortunately, two interfaces exist that fit perfectly into an FPGA-based prototyping device-under-test connection.

  • AXI is a ubiquitous native interconnect between IP blocks on the device-under-test side. It’s a single initiator, single target protocol in basic form, but it extends easily into an N:M interconnect that can scale topology for connecting more IP blocks.
  • PCIe is easy to find in or add to development hosts. It’s fast, cabling is simple, and a host device driver is straightforward.

S2C brings in the ProtoBridge System, a transactor-level interface with speed and visibility that improves test coverage and productivity, especially in the early phases of design. Here’s a conceptual diagram.

Transactors offer a way to bridge the gap between the behavioral models typical of a test and verification environment and the RTL models running in hardware on the FPGAs. Another benefit is that they allow software developers and test engineers to work in a familiar environment, C/C++, instead of RTL or FPGA constructs.

And the requisite speed is there. Some tests may require transferring big stimulus files, like videos or complex waveforms. Stimulus can be stored on the host until needed for a test, then transferred for execution at full PCIe bandwidths up to 4 Gb/sec.
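Conceptually, the host side of such a flow looks like the sketch below. The real ProtoBridge environment is C/C++ with S2C's driver API; the stand-in functions and addresses here are invented so the example is self-contained, and they only model the sequence of staging stimulus, bursting it to an AXI address over PCIe, and reading results back.

# Conceptual host-side flow for streaming stimulus to a prototyping DUT over a
# PCIe-to-AXI bridge. The real ProtoBridge environment is C/C++ with S2C's
# driver; these stand-in functions just model a DUT memory so the sketch runs.

DUT_MEMORY = {}                      # stand-in for FPGA memory behind the bridge

def axi_write(addr, data_bytes):
    """Stand-in for a driver call that bursts data to an AXI address."""
    for offset, byte in enumerate(data_bytes):
        DUT_MEMORY[addr + offset] = byte

def axi_read(addr, length):
    """Stand-in for a driver call that reads a block back from an AXI address."""
    return bytes(DUT_MEMORY.get(addr + i, 0) for i in range(length))

STIMULUS_BASE = 0x8000_0000          # assumed address of the stimulus buffer
RESULT_BASE   = 0x9000_0000          # assumed address of the result buffer

# 1. Stage a large stimulus file on the host, then transfer it when the test needs it.
stimulus = bytes(range(256)) * 16    # placeholder for a video frame or waveform
axi_write(STIMULUS_BASE, stimulus)

# 2. The block under test would now run; here we simply fake a result in place.
axi_write(RESULT_BASE, stimulus[:64])

# 3. Pull results back to the host for checking in ordinary host-side code.
result = axi_read(RESULT_BASE, 64)
print("first result bytes:", result[:8].hex())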

As the project progresses, things shift from an initial bring-up mode to test coverage mode. The ProtoBridge scheme remains the same from start to finish, so teams don’t have to bring an array of different tools to the party. Workflows are smoother, productivity improves, and time is freed up for deeper exploration and testing of designs – all benefits of a shift left strategy.

To get the entire white paper and see the rest of the S2C story on the high bandwidth prototyping device-under-test connection and more background on ProtoBridge, hit the link below, then scroll down to this title:

White paper: High-bandwidth PC-to-DUT Connectivity with ProtoBridge


IC Layout Symmetry Challenges
by Daniel Payne on 08-29-2022 at 10:00 am

1D symmetry

Many types of designs, including analog designs, MEMS, and image sensors, require electrically matched configurations. This matching has a huge impact on the design’s robustness across process variations and on its performance. Having an electrically matched layout basically means having a symmetric layout. To check the box of electrical matching during verification, designers must also check the symmetry of their design.

Design symmetry is defined as either 1D or 2D. 1D symmetry is the symmetry around the x-axis or the y-axis, while 2D symmetry is the symmetry around the center of gravity.

One approach to achieving 1D symmetry is the quarter cell method, in which a cell is placed four times around a common X and Y axis.

Quarter cell layout method

In 2D layout symmetry there is a common centroid, and this common symmetry can be between multiple devices, or even groups of devices. Here’s an example: the layout on the left has common centroid symmetry, while the one on the right does not:

Common centroid symmetry

You can just eyeball an IC layout to see if it’s symmetric, but that’s not precise at all. Another approach is to write specific rule checks, based on your design experience, to catch symmetry violations. A more scalable approach is to use the Calibre nmPlatform with one of three symmetry checking methods:

  • Batch symmetry checks
  • Calibre PERC reliability platform for electrically-aware symmetry checks
  • A new approach that leverages the Calibre RealTime tools to allow for an interactive symmetry checking experience

Taking an interactive approach, which avoids custom rule coding and supports iterative runs to immediately check changes, can offer the highest accuracy and greatest time savings.   This solution includes four kinds of interactive symmetry checks:

  • 1D symmetry
    • Symmetrical about X-axis
    • Symmetrical about Y-axis
  • 2D symmetry (aka common centroid)
    • 90° symmetry
    • 180° symmetry

With Calibre’s interactive solution, the symmetry checks give live feedback to pinpoint any violations for fixing, and DRC checks can also be run in parallel to add even more efficiency. An example of the graphical feedback is shown for a magnetic actuator, where the metal layer on the right of this MEMS circuit has a symmetry violation highlighted in cyan color:

Symmetry violation in Cyan color

Verifying that two devices in a differential-pair op-amp are symmetrical involves selecting the device area, running the symmetry check, and viewing the results to pinpoint any violations.

Selecting device area inside green rectangle
Metal 1 symmetry violations in Red

The error markers tell the layout designer where the symmetry violation is, and whether it is caused by missing or extra polygons, shifted polygons, or a difference in polygon size. Fixing symmetry violations may create new DRC violations, making it advantageous to run both kinds of checking (symmetry and DRC) at the same time, as the Calibre toolsuite allows, to reduce the time needed to achieve a symmetric, DRC-clean design.
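For intuition about what a symmetry check is doing, here is a small, hypothetical sketch that mirrors one set of rectangles about the Y-axis and classifies the differences as missing, extra, or shifted polygons. It is illustrative only and not the Calibre algorithm.

# Hypothetical sketch of a 1D symmetry check about the Y-axis. Rectangles are
# (x1, y1, x2, y2). Illustrative only, not the Calibre implementation.

def mirror_y(rect):
    # Reflect a rectangle about x = 0 and re-normalize the corner order.
    x1, y1, x2, y2 = rect
    return (-x2, y1, -x1, y2)

def check_symmetry(left_shapes, right_shapes, tol=0.0):
    mirrored = {mirror_y(r) for r in right_shapes}
    missing = set(left_shapes) - mirrored      # expected on the left, absent
    extra   = mirrored - set(left_shapes)      # present only on the mirrored right
    # A missing/extra pair with equal size is reported as a shifted polygon.
    shifted = []
    for m in list(missing):
        for e in list(extra):
            same_size = (abs((m[2]-m[0]) - (e[2]-e[0])) <= tol and
                         abs((m[3]-m[1]) - (e[3]-e[1])) <= tol)
            if same_size:
                shifted.append((m, e))
                missing.discard(m); extra.discard(e)
                break
    return missing, extra, shifted

# Usage: metal-1 rectangles of a differential pair, one shape shifted on the right.
left  = [(-5, 0, -1, 2), (-5, 3, -1, 5)]
right = [( 1, 0,  5, 2), ( 1, 4,  5, 6)]   # second rectangle shifted up by 1
print(check_symmetry(left, right))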

Summary

Analog IC layout design can be a long, manual and error-prone process, leading to lengthy schedules, especially when symmetry is required to achieve optimum performance. Traditional approaches, like coding custom symmetry checks or doing visual inspections, can take up valuable time and resources.

The approach of using a commercial tool like the Calibre nmPlatform to perform interactive symmetry checking is a welcome relief. Because the interactive results happen so quickly, the layout designer is alerted to the specific areas that are violating symmetry rules and is then able to apply fixes, saving valuable time. Being able to check for both symmetry and DRC violations interactively is a nice combination to ensure a symmetric DRC clean design.

Read the complete white paper online here.



Verifying 10+ Billion-Gate Designs Requires Distinct, Scalable Hardware Emulation Architecture
by Daniel Nenni on 08-29-2022 at 6:00 am


In a two-part series, Lauro Rizzatti examines why three kinds of hardware-assisted verification engines are a must-have for today’s semiconductor designs. To do so, he interviewed Siemens EDA’s Vijay Chobisa and Juergen Jaeger to learn more about the Veloce hardware-assisted verification systems.

What follows is part one, a condensed version of his discussion with Vijay Chobisa, Product Marketing Director in the Scalable Verification Solution division at Siemens EDA. They talk about why verification of 10+ billion-gate designs requires a distinct architecture.

LR: The most recent announcement from Siemens EDA in the hardware-assisted verification area included an expansion of the capabilities of the Veloce Strato emulator with a new Veloce Strato+, Veloce Primo for enterprise prototyping, and Veloce proFPGA for desktop type prototyping. What has been the customer response to these new capabilities?

VC: Last year, we announced the Veloce Strato+ emulation platform, the Veloce Primo enterprise prototyping system, and the Veloce proFPGA traditional prototyping system. The response from our customers has been fantastic. Let me briefly go through what the new capabilities consist of.

When we announced Veloce Strato+, we specified the ability to emulate a single instance of an SoC design as big as 12-billion gates. Today, we have several customers emulating 10- to 12-billion gate designs on Strato+. Such designs are becoming popular in AI and machine learning, networking, CPUs/GPUs, and other leading-edge applications. What customers really like about Veloce Strato+ is the ability to move their entire verification environment from Veloce Strato by just flipping a single compile switch and get 1.5X capacity in the same footprint.

A unique feature of Veloce Primo is the interoperability with Veloce Strato+. A customer owning both platforms can switch from emulation to prototyping when designs reach stability, to run at a much faster speed, and in the process get more verification cycles at lower cost. They can also switch back from prototyping to emulation when they run into a design bug and need to root cause the issue, correct the design and verify the issue has been removed. Veloce Strato+ excels in design turnaround time (TAT) by supporting 100% native visibility, like in simulation, with rapid waveform generation, as well as fast and reliable compilation.

Our Veloce proFPGA prototyping delivers ultra-high speed in the ballpark of 50-100 MHz to support software teams performing software validation.

LR: How do you describe the advantages of the Veloce platform approach and how it differs from other hardware assisted verification approaches?

VC: Let me start by saying that we are working very closely with our partner customers and we attentively monitor their roadmaps to make sure that we are providing verification and validation tools to address their challenges. Let me touch on some of those.

We have seen three common design trends across several industries. First, designs are becoming very large. Second, thorough verification and validation of such designs before tapeout requires the execution of vast amounts of software workloads covering different use cases. Third, power and performance have become critical. To deal with all of the above, we had to address three fundamental aspects of the Veloce platform: design capacity, design compilation, and design debug.

When I say that Veloce Strato+ can emulate a 12-billion gate design, that’s not by luck. We designed a unique architecture that can scale in all three aspects. Veloce Strato+ is a robust system that can map and emulate monolithic designs of up to 12-billion gates, executing large workloads for a very long time with reliability and repeatability.

The Veloce Strato+ compiler can map 12-billion-gate designs in less than a full day. Customers would like to get a minimum of one or two turns a day in order to use emulation effectively for verification. Our compiler takes advantage of design structures assembled by customers and of advancements in processor/server topology. We developed technologies such as template processing, distributed Velsyn, and ECO compile. Our goal is to allow users to perform a couple of compilations per day, even for very large designs.

The final aspect is design debug. Customers coming from simulation would like to see exactly the same debugging environment in emulation just running faster at larger scale. We support the same Visualizer GUI across simulation, emulation and prototyping.

A unique differentiator of the Veloce architecture is fast waveform generation regardless of the design size. For example, Veloce can capture the data for every node in the design for one-million cycles and generate waveforms within five minutes, regardless of the design size. Whether the design is half a billion gates or four-billion gates or 12-billion gates, the time to generate the entire set of waveforms for one-million cycles is the same.

The bottom line is that the Strato+ architecture scales not only in terms of capacity, but also in terms of infrastructure to provide an efficient environment for compiling, running and debugging large designs, rapidly, accurately, and reliably. Users can run emulation, find a bug, fix the design and validate the change all within a day.

All of the above are advantages of Veloce Strato+ vis-à-vis our competition. As of today, Strato+4M is the only emulator on the market that can emulate 10- to 12-billion gates monolithic designs efficiently and consistently.

LR: Let’s look into the future. What are you hearing from customers and potential Veloce users about additional challenges and needs for hardware assistive verification tools for the next three years?

VC: As I mentioned, we work closely with our customers to design better hardware-assisted verification products that meet customer requirements. Let me give you some examples of how we solve customer issues.

Traditionally emulation in the storage market is used in in-circuit-emulation (ICE) mode where the design is connected to and driven by physical devices. This use mode is inherently limited when it comes to measuring bandwidth or I/O traffic performance, debug and access by teams from different geographies.

Instead, we built a solution based on virtual devices in close collaboration with top storage customers. We have many customers using Veloce to verify their design and software using ICE setups. However, our customers are increasingly adopting a VirtuaLAB-based use model due to the above challenges with ICE. Today, storage customers using Veloce Strato+ can do exactly what they were doing in the ICE environment, plus measure I/O bandwidth and traffic very accurately and inject errors to test corner-case scenarios. They can also perform power analysis, a critical objective in the storage industry.

Our approach is to work with customers in each vertical market segment to understand their challenges and build an efficient and effective solution.

As already mentioned, power and performance are becoming critical not only for customers designing smart devices, but also for the semiconductors powering HPC and data centers. Again, we work closely with these customers to enable the “shift-left” in power profiling and power analysis, and to generate accurate power numbers by running software applications, customer workloads and benchmarks. This early power proofing allows them to influence RTL code and software to ensure that their power budget is within the envelope and that they can deliver the required performance.

Another aspect is functional safety. In some market segments today, functional safety is becoming very important. People designing autonomous cars or chips for the mil-aero industry consider functional safety verification a critical need. Customers are looking for the ability to inject a fault and verify how the design, in hardware or software, responds to it. They also need end-to-end FuSa solutions where they can do the analysis, generate fault campaigns, and output hardware metrics to see whether the design is ISO 26262 compliant or not. That is the focus in our organization. We are delivering end-to-end FuSa solutions where customers can fully rely on an ISO 26262-certified solution coming from Siemens EDA to validate their chips.

Looking into the future, power, performance and functional safety will continue to grow in importance.

LR: To conclude, you have been working in the hardware-assisted verification domain for quite a while. What are some of the aspects of the job that continue to motivate and fascinate you most?

VC: Siemens is a great company to work for. The Siemens culture, the processes, and the open-door policy are highly motivating for me. Open door means that you can approach anyone; people interact and cooperate with each other as a team. We do not pursue individual success; rather, we aim for our division’s success and our company’s success, and by reflection that makes us successful. We all recognize each other.

Just as important for me is that at Siemens we are able to drive our own roadmaps and not depend on anybody else. I love talking to customers, learning what they are doing and where they are going five years down the road. Understanding that, and feeding it back to the division to build products around it, pleases me greatly.

LR: Thank you, Vijay.

VC: Thank you, Lauro.

Also read:

UVM Polymorphism is Your Friend

Delivering 3D IC Innovations Faster

Digital Twins Simplify System Analysis


GM: Where an Option is Not an Option
by Roger C. Lanctot on 08-28-2022 at 10:00 am


How does a General Motors executive react when they get a transfer to work at OnStar? “What am I going to tell my partner?”

Twenty-six years after its founding, OnStar remains an appendage to GM – a team set apart from the heart and soul of the larger company. Team members assert that the group is profitable, thanks to millions of GM subscribers, but it has less control over its destiny as hardware responsibilities were removed years ago.

Though profitable, the group’s revenue is not sufficiently material to GM’s results to merit a mention on earnings calls. Not a one. Sure, Cruise may be burning $550M a quarter gumming up traffic in San Francisco, but a profitable OnStar? Ghosted.

The “otherness” of the OnStar organization was made apparent during GM’s journey through Chapter 11 bankruptcy after the “great recession.” As the company looked to potentially sell assets, the one division that attracted avid attention was, in fact, OnStar – with Verizon among the potentially interested buyers.

OnStar is one of the most powerful brands – globally – in the connected car space and, yet, its parent company continues to hold the group at arm’s length. A service that should long ago have become synonymous with GM and a brand defining gem somehow retains the status of an albatross.

The latest evidence of this is rife in recent announcements. OnStar announced a new “cleaner” logo – whatever that means – and is venturing beyond cars to offer safety and security services to pedestrians, hikers, and motorcyclists while offering in-home services via Alexa.

Meanwhile, news arrives this week that GM is making OnStar a non-optional $1,500 three-year subscription on Buick and GMC vehicles. According to the GM Authority newsletter, “the automaker will equip all new 2022 and 2023 model year Buick and GMC vehicles with a three-year OnStar and Connected Services Plan. The plans cost between $905 and $1,675, depending on the chosen trim level.”

The newsletter indicates that the cost is to be included in the vehicles’ MSRP, “however the online configurator tools for the Buick and GMC brands suggest these charges are added on top of the MSRP, for the time being.” The services include remote key fob functions, Wi-Fi data, and OnStar safety services.

Making OnStar a non-optional option sends some powerful and unfortunate marketing messages to the car-buying public including:

A)    GM is removing the power of choice from your connected car decision-making.

The news is a sad echo of GM’s announcement in 2011 under then-OnStar president Linda Marshall that the company would reserve the right to compile and sell information about drivers’ habits even after users discontinue the service – unless a user explicitly opts out. The announcement led to Congressional calls for an investigation by the Federal Trade Commission and likely contributed to Marshall’s early exit from her leadership role at OnStar in the following year.

B)    GM has failed to find a sufficiently compelling application or combination of applications to drive OnStar adoption organically.

In the early days of OnStar, before smartphones, the fear factor was a powerful motivator for selling the service to consumers. If your OnStar-equipped GM vehicle was involved in a crash, OnStar would automatically summon assistance. It is a capability that was ultimately mandated in Europe, as so-called eCall, in all cars.

In a post-smartphone world, the average driver doesn’t think they are going to be involved in a crash and, even if they are, they are convinced they’ll be able to call for assistance on their own – provided they are conscious. OnStar has tried to enhance this automated crash functionality with built-in Wi-Fi services and, more recently, access to Alexa. But the enhancements are insufficiently compelling.

C)    The built-in connection in the car is some sort of add-on device that must be paid for separately.

It is no mystery that building connectivity into cars is an expensive business. The hardware and software are expensive, and the back-end secure network operating center is a further source of cost and liability. To that can be added the dedicated call center and, of course, the wireless service itself.

The reality is that no one can buy a new car in the U.S. today – or Europe or China, for that matter – that isn’t equipped with a wireless connection. Wireless connectivity is a comes-with proposition in the auto industry today. Some amount of cost for the hardware, software, service, and infrastructure has to be built into every car. GM is the first auto maker brazenly putting a price to it and shoving it in the customer’s face. It’s not a promising strategy.

The value proposition of the in-car connection long ago shifted from the customer to the car maker and the dealer. Auto makers stand to benefit mightily from being connected to their cars and their customers. Auto makers should never give consumers any reason to think twice about the connectivity devices in their cars.

Via connectivity, car companies can anticipate vehicle maintenance issues and possibly prevent failures; they can respond to crashes and breakdowns in a timely manner; and maybe they can more readily identify and remedy vehicles with outstanding recalls. Car makers are entitled to compensation for providing vehicle-centric cell service and the vast majority of consumers are willing to pay.

SOURCE: Strategy Analytics consumer survey results from upcoming report.

Soon-to-be-published Strategy Analytics research shows a strong inclination among consumers to pay for service packages associated with their cars. In fact, a majority of those consumers across a range of demographics and regions are willing to pay upfront. But not all.

GM’s announcement is an inelegant approach to solving a problem facing the entire industry. Simply put, car companies can no longer afford to sell cars for a one-time price and be done. Cars must be connected. Software must be protected and updated. A long-term subscription-centric strategy is unavoidable.

GM is putting all the onus for subscription collection on OnStar which is more or less walled off from the rest of GM. OnStar is not intimately integrated into the customer-dealer relationship and the latest initiatives from OnStar – focused on extra-vehicular use cases – suggest further straying from a focus on connected cars.

A big question looms over GM and the industry: What is the strategy for monetizing vehicle connectivity? Will it be jacking up OnStar/telematics subscriptions, building the cost into cars upfront, charging for features on demand?

Achieving long-term vehicle-based revenue production will necessitate more smoothly integrating OnStar into the GM vehicle ownership and dealer experience. The entire purpose of vehicle connectivity is customer retention. GM needs better marketing and messaging to keep the company at the forefront of the connected car industry. This latest messaging is an amazing marketing failure and a non-starter for most consumers.

One company can take heart from GM’s stumble. After becoming the poster child for features-on-demand by announcing plans to charge a subscription for access to heated seats, BMW will have a bit of schadenfreude at GM’s expense – this time.

Also read:

C-V2X: Talking Cars: Toil & Trouble

Automotive Semiconductor Shortage Over?

Auto Makers Face Existential Crisis

Time for NHTSA to Get Serious


C-V2X: Talking Cars: Toil & Trouble
by Roger C. Lanctot on 08-28-2022 at 6:00 am


Last year, the U.S. Federal Communications Commission sought to resolve the lingering dispute over the use of 75MHz of Wi-Fi spectrum in the 5.9GHz range, previously allocated to the automotive industry for safety applications, by designating 45MHz of that spectrum for unlicensed use while preserving 30MHz for automotive safety. The move opened the door to cellular-based C-V2X deployments – kicking aside the DSRC (dedicated short-range communications) technology favored by many auto makers.

ITS America and the American Association of State Highway and Transportation Officials (AASHTO) filed suit to block the action in court. Last week the court ruled in favor of the FCC.

Prior to that ruling, three auto makers (Jaguar Land Rover, Audi of America, and Ford Motor Company), nine hardware manufacturers (of roadside and in-vehicle devices), along with several state transportation authorities sought waivers from the FCC to proceed to deploy C-V2X technology and/or to replace older so-called DSRC V2X technology. Weeks ago, comments supporting the waiver requests poured in and action is expected within weeks.

Several dozen entities filed comments with the FCC on the C-V2X waiver filings, overwhelmingly supporting immediate deployment of C-V2X in the 5905-5925 MHz band. Opposition from pro-DSRC parties was muted. The DOTs of California, Colorado, South Dakota and Wyoming, along with Tampa, FL, and Atlanta and Alpharetta, GA, all asked the FCC to expeditiously approve the waiver requests.

Notably, ITS America and AASHTO – the groups that challenged the FCC decision allocating 45 MHz of the ITS band to unlicensed use and the remaining 30 MHz to C-V2X – filed comments strongly supporting the waivers.

The judge’s ruling against ITS America and AASHTO, and in favor of the FCC, ought to be the last word. But perhaps not.

The battle over V2X technology, which began with an FCC allocation of 75MHz of spectrum for DSRC in 1999, continues. In a recent Automotive News “Shift” podcast, Kelly Funkhouser, manager of vehicle technology at Consumer Reports, spoke out in favor of DSRC as a technology that could dramatically reduce vehicle collisions and the related fatalities. Funkhouser displayed a frightening lack of awareness of C-V2X tech and an unfortunate partiality to older, now-defunct DSRC tech.

Similarly, a podcast produced by the Alliance for Automotive Innovation – which included participants from the U.S. DOT and the National Transportation Safety Board (NTSB) – appeared to promote DSRC as a technology that could eliminate 90% of highway crashes and related fatalities. The NTSB representative, in particular, advocated DSRC – the older technology that will be compromised by the FCC re-allocation.

The AAI podcast was especially disturbing for its emphasis on the risk of signal interference within the more limited 30MHz of allocated spectrum and the unlicensed use of the nearby 45MHz. This concern verged on disinformation in association with C-V2X technology.

The comments on the AAI podcast were reminiscent of similar skepticism and resistance toward C-V2X technology expressed at the ITS America summit in Charlotte, N.C., last November. During a panel discussion at the event a U.S. DOT executive did his best to raise questions regarding the efficacy of C-V2X.

The AAI has a 10-point plan for the promotion and adoption of V2X technology, notably not distinguishing C-V2X and still appearing to put a thumb on the scale in favor of DSRC. You can judge for yourself here: https://www.autosinnovate.org/about/advocacy/V2X%20Policy%20Agenda.pdf

The sad reality vis-à-vis the AAI is that the Infrastructure Bill already passed by Congress is filled with a wide array of funding and initiatives to promote the adoption of C-V2X technology specifically – again, reflecting the reality that DSRC is a dead letter.

With waivers expected to be granted and the recent judicial victory for the FCC, the path is finally clear in the U.S. for widespread C-V2X adoption and deployment. In this context, it is time for the DSRC crowd to dismount from the barricades and accept reality and, really, fundamentally, stop standing in the path of progress.

Also read:

Automotive semiconductor shortage over?

Auto Makers Face Existential Crisis

Time for NHTSA to Get Serious


Podcast EP103: A Look at the Game-Changing Technology Being Built by Luminous Computing
by Daniel Nenni on 08-26-2022 at 10:00 am

Dan is joined by Michael Hochberg, president at Luminous Computing. His career has spanned the space between fundamental research and commercialization for over 20 years. He founded four silicon photonics companies garnering a total exit value of over a billion dollars.

Dan explores the computing technology being built by Luminous and the fundamental impact it will have on the growth of AI applications and the AI market in general.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.