
Podcast EP60: Knowing your bugs can make a big difference to elevate the quality of verification

by Daniel Nenni on 02-04-2022 at 10:00 am

Dan is joined by Philippe Luc, director of verification at Codasip. Philippe has spent over 20 years in verification, including an extensive and successful career at Arm, where his achievements included:

  – Design and verification of coherent caches for the first multiprocessor core from Arm (Cortex-A9)

  – Development of the random test bench for L1 and L2 caches, used on most A- and R-class processors

  – Initiation and leadership of the development of one of the major random generators used on all application processors

  – Verification lead for the Cortex-A17 core

Today, Philippe leads Codasip’s growing verification team from France, a key part of Codasip’s increasingly global team. His mission is to focus on boosting the quality of RISC-V processor IP, and to do so efficiently. Dan explores why bug tracking is so important with Philippe and how the process can impact the quality of designs.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Codasip on SemiWiki.com


CEO Interviews: Kurt Busch, CEO of Syntiant

by Daniel Nenni on 02-04-2022 at 6:00 am


Named Ernst & Young’s Entrepreneur of the Year® 2021 Pacific Southwest – Orange County, Kurt Busch is a tech industry veteran with extensive experience in product development, having driven the successful launch of new products ranging from SaaS and semiconductors for telecom and broadcast video to consumer electronics and data center systems. Prior to founding Syntiant Corp., Busch was president, CEO and a board director at Lantronix (NASDAQ: LTRX), a global provider of secure data access and management solutions for Internet of Things (IoT) and information technology (IT). He is an engineering hall of fame inductee of the University of California at Irvine, where he earned bachelor’s degrees in electrical engineering and biological science. He also holds an MBA from Santa Clara University.

Can you tell us a little about Syntiant?

We founded Syntiant in 2017 with the idea of building a new kind of processor that would bring artificial intelligence to most any edge device. At the time, AI was the domain of cloud computing, and no one was thinking of putting significant deep learning processing into devices that operated at the edge. Today, we have shipped more than 20 million of our Neural Decision Processors worldwide, making edge AI a reality for always-on voice, sensor and image applications in a range of consumer and industrial use cases, free from cloud connectivity, ensuring privacy and security.

What is unique about the company and its product technology?

We designed our technology as a complete turnkey system by combining purpose-built silicon with an edge-optimized data platform and training pipeline. Syntiant’s devices typically offer more than 100x efficiency improvement, while providing a greater than 10x increase in throughput over current low-power MCU solutions, subsequently enabling larger networks at significantly lower power. Using at-memory compute, built in standard CMOS processes, Syntiant devices directly process neural network layers from platforms such as TensorFlow without the need for any secondary compilers, which shortens time to market and offers unprecedented performance for solutions that require under 1mW power consumption.

What industries are Syntiant addressing? 

Syntiant’s deep neural network processors are being designed into all kinds of end uses, from earbuds to automobiles. We are working with about 80 customers globally across market segments including consumer, medical and industrial IoT. Our NDP100 and NDP101 are being used for always-on voice applications, the NDP102 for sensor processing, the NDP120 for speech and sensor fusion, and the NDP200 for vision and image recognition. We went from offering just voice to an expanded product line that includes sensor, audio and image processing, as well as offering the data and training too, providing customers with low-cost, low-latency, end-to-end solutions that quickly deliver production-grade deep learning models in a variety of domains.

What problems/challenges are you solving?

We’re moving AI from the cloud to the edge. Production deep learning models require significant data and training expertise, as well as significant processing power. The lack of clean data, training expertise and sufficient processing power has created fundamental blockers for mass edge AI deployments. Syntiant has tackled these fundamental challenges: first, with custom silicon delivering best-in-class performance while still meeting size, power and cost constraints for massive edge deployments; second, with the ability to collect, clean, align and generate data for ML training; and lastly, by providing a training pipeline, optimized for edge applications, that can go from raw data to production-quality machine learning models in an economical manner.

What’s new?

There is a lot of discussion about the democratization of AI, enabling most anyone to utilize the benefits of machine learning and not just the big Internet companies. While we usually deal with large-volume customers, we also want to expand the reach and availability of AI. That’s why we launched our new TinyML Development Board for building low-power voice, acoustic event detection and sensor ML applications. This collaboration with Edge Impulse now enables anyone, from individual developers and hardware engineers to small companies, to design, build and deploy highly accurate ML applications that respond to speech, sounds and motion with minimal power consumption. Whether it is for a wearable, an industrial product or even to assist people with disabilities, the possibilities are endless with our new TinyML board, which provides a full solution for bringing the power of artificial intelligence to almost any device.

What’s next for AI at the edge?

We’ve just begun to scratch the surface on how AI will impact people’s everyday lives. Using Syntiant technology, devices can hear, speak, see and feel, making natural interfaces the path to the future. Advances in AI already are having a profound impact on many societal issues, including how voice technology can help those with disabilities and the elderly, as well as those in remote parts of the world with limited or no Internet access. As AI pervasiveness grows globally, so do myriad applications for public health like our collaboration with Canary Speech, a leader in the voice digital biomarker industry. Our joint deep learning solution enables real-time patient monitoring to detect health conditions such as Alzheimer’s disease, anxiety, depression, as well as a complex voice energy measurement. We’ve also seen AI play a big part in the industrial IoT landscape. Until now, predictive maintenance and condition-based monitoring usually has been done in the cloud. That said, we just announced a collaboration with Ceramic Speed for their Bearing Brain project, which moves prediction and forecasting down to the battery-powered sensor device itself to reduce or eliminate unforeseen maintenance costs. Our technology can continuously monitor sounds, vibrations and even temperature with minimal drain on power consumption, extending battery life by months or years, while improving performance, productivity and efficiency across a wide range of manufacturing applications.

Also read:

CEO Interview: Mo Faisal of Movellus

CEO Interview: Fares Mubarak of SPARK Microsystems

CEO Interview: Pradeep Vajram of AlphaICs


Waymo Collides with Transparency

by Roger C. Lanctot on 02-03-2022 at 10:00 am


Anyone looking to U.S. Transportation Secretary Pete Buttigieg to forthrightly assert a path-setting policy vision to guide autonomous vehicle development in the U.S. during his CES 2022 keynote was sorely disappointed. There was no guidance from the Secretary.

The issue has gained new urgency now that Waymo has sued the California Department of Motor Vehicles for allegedly sharing some Waymo-specific operational data with an unspecified inquiring third party. Outraged, Waymo is seeking an end to the sharing of its data relevant to how its autonomous vehicles operate or cope with specific circumstances.

Waymo complaint: https://www.courthousenews.com/wp-content/uploads/2022/01/waymo-calif-dmv-complaint.pdf

The lawsuit represents an important turning point in autonomous vehicle regulation. California lays claim to some of the most rigorous reporting requirements in relation to what is likely the largest group of licensed AV operators in the world.

The primary philosophy behind California’s autonomous vehicle regulatory policy is one of disclosure. Operators are obliged to report all disengagement events – where the safety driver has had to take over from the AV system. This, in turn, has created a battle among licensed operators to show the greatest distance traveled, on average, between disengagement events.
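That race is easy to quantify from the annual filings. A minimal sketch of the headline metric, using made-up figures (operator names and numbers are illustrative, not taken from any actual report):

```python
# Miles per disengagement: the headline metric derived from California's
# annual AV disengagement reports. All figures below are illustrative.
reports = {
    "Operator A": {"miles": 1_200_000, "disengagements": 40},
    "Operator B": {"miles": 350_000, "disengagements": 70},
}

def miles_per_disengagement(miles: float, disengagements: int) -> float:
    """Average distance traveled between safety-driver takeovers."""
    # Guard against operators reporting zero disengagements.
    return miles / disengagements if disengagements else float("inf")

for name, r in reports.items():
    mpd = miles_per_disengagement(r["miles"], r["disengagements"])
    print(f"{name}: {mpd:,.0f} miles per disengagement")
```

The simplicity of the metric is exactly the problem the article describes: it rewards easy operating conditions as much as better software.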

Waymo has used California’s reporting framework as a marketing tool to advertise its performance advantages over the numerous competitors operating in the state. Observers have grown frustrated that the disengagement-centric system is skewing AV development priorities in favor of favorable operating environments including location and time of day.

What is missing in the California regulatory regime is a minimum set of performance requirements, standards, or tests that operators must meet to receive their operating license. The regulation is performance-based only in retrospect: it calls for mitigation in the event of failures, and it is those functional disclosures that the State collects and has now allegedly shared.

Ironically, since each licensed operator is generally pursuing its own bespoke path to autonomous operation it is unclear that any could benefit from learning about specific corrective measures that any other operator might have taken. All operators are presumably using similar mathematics, but each is using a unique portfolio of sensors and each has its own philosophical approach to writing its AV code.

The lawsuit highlights the lack of an adequate performance-based licensing or regulatory regime for AV operation on public roads. Each of the 50 U.S. states has pursued its own unique approach – as have countries around the world.

The U.S. came close to establishing an AV regulatory regime at the end of the Obama administration, but fell short after unresolved disputes emerged over the number of AVs that would be exempted from Federal Motor Vehicle Safety Standards requirements such as brake pedals and steering wheels.

It is fairly clear that the Federal government is not in a position to establish a single path to autonomous operation. In this regard it is worth noting that the first AV operator to be granted an FMVSS waiver was Nuro – the maker of delivery bots.

What might work, as part of a process of setting AV operational standards, would be a series of operational tests that AV prototypes will have to pass – such as recognizing and responding to obstacles and other vehicles. Such an approach can be calibrated to establish some basic performance characteristics without giving an advantage to any particular operator or strategic approach.

It is worth noting that in the current global environment characterized by the existing regulatory vacuum, Mobileye, alone, has a unique advantage in putting forth its Responsibility-Sensitive Safety (RSS) framework. Mobileye says RSS “has advanced its way into both IEEE and ISO standards efforts recently.  Intel Senior Principal Engineer and Mobileye VP of Automated Vehicle Standards Jack Weast is chairing the IEEE effort to adopt a formal technical standard known as IEEE P2846: A Formal Model for Safety Considerations in Automated Vehicle Decision Making.”

Alone among operators, Mobileye is working to turn transparency into a competitive advantage. No competing operator has yet come forward to offer an equivalent vision – though Nvidia tried, and failed, with its Safety Force Field (SFF) alternative, which was quickly set aside.

While Mobileye touts RSS, competitors are left with smoke and mirrors. And Waymo clearly wants to keep that smoke and those mirrors in place – resisting requirements that it share elements of its disengagement mitigation. Waymo may be getting something of a comeuppance in California, where General Motors’ Cruise may report some exceptionally low disengagement figures – surpassing even Waymo – after operating exclusively at night.

It’s time for U.S. regulators to put forward some minimum performance requirements. The U.S. DOT’s National Highway Traffic Safety Administration has spent decades crashing cars. Isn’t it about time they started figuring out how to prevent cars from crashing in the first place?

I think it is. The Waymo lawsuit is a sign of the times and the time has come for change. The framework for regulation should be less focused on disclosure than it is on performance testing. Regulators should define the objectives and measure and monitor their achievement – anything less is an abdication of responsibility.

Also read:

Apple and OnStar: Privacy vs. Emergency Response

Musk: Colossus of Roads, with Achilles’​ Heel

RedCap Will Accelerate 5G for IoT


Why It’s Critical to Design in Security Early to Protect Automotive Systems from Hackers

by Mike Borza on 02-03-2022 at 6:00 am


Remember when a pair of ethical hackers remotely took over a Jeep Cherokee as it was being driven on a highway near downtown St. Louis back in 2015? The back story is, those “hackers,” security researchers Charlie Miller and Chris Valasek, approached vehicle manufacturers several years before their high-profile feat, warning of the risks that security vulnerabilities posed for cars. However, manufacturers at the time didn’t consider cars to be targets for cyberattacks.

With the amount of hardware and software content enabling greater automation, vehicles actually have many potential points of security vulnerability—much like many of our other smart, connected IoT devices. Let’s take a look at key automotive areas that should be protected, why it’s important to keep security in mind starting early in the design cycle, and how you can protect the full car from bumper to bumper.

ECUs: Irresistible to Hackers

We can start our discussion with electronic control units (ECUs), the embedded systems in automotive electronics that control the electrical systems or subsystems in vehicles. It’s not uncommon for modern vehicles to have upwards of 100 ECUs running functions as varied as fuel injection, temperature control, braking, and object detection. Traditionally, ECUs were designed without the requirement that they validate the entities with which they communicate; instead, they simply accepted commands from and shared information with any entity on the same wiring bus. Vehicle networks were not considered to be communications networks in the sense of, say, the internet. However, this misconception has created the biggest vulnerability.

Going back to the Jeep hack, Miller and Valasek set out to demonstrate how readily ECUs could be attacked. First, they exploited a vulnerability in the software on a radio processor via the cellular network, then moved on to the infotainment system, and, finally, targeted the ECUs to affect braking and steering. That was enough to get the automotive industry to start paying more attention to cybersecurity.

Today, it’s common for ECUs to be designed with gateways, so that only those devices that ought to be talking to each other are doing so. This presents a much better approach than having a wide-open network in the vehicle.
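In software terms, such a gateway amounts to a default-deny allow-list of which ECU pairs may exchange traffic. A minimal conceptual sketch (the ECU names, routes, and frame format are hypothetical, not from any production gateway):

```python
# Conceptual model of an in-vehicle gateway: frames are forwarded only
# between ECU pairs on an explicit allow-list. All names are hypothetical.
ALLOWED_ROUTES = {
    ("infotainment", "climate_control"),
    ("brake_controller", "abs_module"),
}

def forward(src: str, dst: str, frame: bytes) -> bool:
    """Forward a frame only if the (src, dst) route is explicitly permitted."""
    if (src, dst) in ALLOWED_ROUTES:
        # A real gateway would write the frame to the destination bus here.
        return True
    # Default-deny: unknown routes are dropped (and ideally logged).
    return False
```

The key design choice is the default-deny posture: a route that nobody thought to permit is a route an attacker cannot use, which is the opposite of the wide-open shared bus the article describes.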

How Infotainment Systems Can Be Exploited

In addition to ECUs, cars can include other vulnerabilities that can allow a bad actor to hopscotch from one device inside the vehicle to another. Consider the infotainment system, which is connected to cellular networks for activities such as:

  • Firmware updates to cars from vehicle manufacturers
  • Location-based roadside assistance and remote vehicle diagnostic services
  • Increasingly in the future, vehicle-to-vehicle and vehicle-to-everything functions

The thing is, infotainment systems also tend to be connected to various critical vehicle systems to provide drivers with operational data, such as engine performance information, as well as to controls, ranging from climate control and navigation systems to those that tie in to driving functions. Infotainment systems also increasingly have some level of integration with the dashboard—with modern dashboards becoming a component of the infotainment display. Given all the connections that exist in this vehicle subsystem and the powerful, full-featured software on them that performs these functions, it is probable that someone will exploit a vulnerability to hack into them.

Safeguarding In-Vehicle Networks

To prevent such attacks, it’s important to apply physical or logical access controls on what type of information gets exchanged between more and less privileged subsystems of the network. To ensure that communications are authentic, it is also critical for in-vehicle networks to tap into the security experience gained over the past 30 years in the networking world by combining strong cryptography with strong identification and authentication. All these measures should be planned early in the design cycle to provide a robust security foundation for the system. Doing so early is less labor intensive, less costly, and more effectively scrutinized for residual risk than incorporating security measures piecemeal to address problems that emerge later.
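As one illustration of what strong cryptography at the message level can look like, here is a hedged sketch of HMAC-based message authentication using Python's standard library. The key, message format, and 8-byte tag truncation are assumptions for illustration, not a specific automotive standard (production systems typically follow schemes such as AUTOSAR SecOC, which likewise appends a truncated MAC to bus messages):

```python
import hashlib
import hmac

# Illustrative shared key; in a vehicle this would be provisioned
# securely per subsystem, never hard-coded.
KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")

def tag(message: bytes) -> bytes:
    """Compute an 8-byte authentication tag (truncated HMAC-SHA256)."""
    return hmac.new(KEY, message, hashlib.sha256).digest()[:8]

def verify(message: bytes, received_tag: bytes) -> bool:
    """Recompute the tag and compare in constant time (avoids timing leaks)."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"BRAKE_CMD:ENGAGE"
t = tag(msg)
print(verify(msg, t))               # genuine message is accepted
print(verify(b"BRAKE_CMD:OFF", t))  # spoofed message fails verification
```

Without the key, an attacker who can inject frames on the bus cannot produce a valid tag, so spoofed commands are rejected even though the bus itself is shared.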

The increasing popularity of Ethernet for in-vehicle networks is a positive development. Ethernet comes with some cost savings and some powerful networking paradigms that support the speeds needed for applications like advanced driver assistance systems (ADAS) and autonomous driving, as well as increasing applications of infotainment systems. Part of the Ethernet standard provides for devices identifying themselves and proving their identity before they are allowed to join the network and perform any critical functions.

NHTSA Automotive Cybersecurity Best Practices

The National Highway Traffic Safety Administration (NHTSA) suggests a multilayered automotive cybersecurity approach, with a better representation of the in-vehicle system as a network of connected subsystems that may each be vulnerable to cyberattack. In its updated cybersecurity best practices report released this month, NHTSA provides various recommendations regarding fundamental vehicle cybersecurity protections. Many of these would seem to be common-sense practices for development of critical systems, but these practices have been (and even continue to be) surprisingly absent from many of them. Among the suggestions for a more cyber-aware posture:

  • Limit developer/debugging access in production devices. An ECU could potentially be accessed via an open debugging port or through a serial console, and often this access is at a privileged level of operation. If developer-level access is needed in production devices, then debugging and test interfaces should be appropriately protected to require authorization of privileged users.
  • Protect cryptographic keys and other secrets. Any cryptographic keys or passwords that can provide an unauthorized, elevated level of access to vehicle computing platforms should be protected from disclosure. Any key from a single vehicle’s computing platform shouldn’t provide access to multiple vehicles. This implies that a careful key management strategy based on unique keys and other secrets in each vehicle, and even subsystem, is needed.
  • Control vehicle maintenance diagnostic access. As much as possible, limit diagnostic features to a specific mode of vehicle operation to accomplish the intended purpose of the associated feature. Design such features to eliminate or minimize potentially dangerous ramifications should they be misused or abused.
  • Control access to firmware. Employ good security coding practices and use tools that support security outcomes in their development processes.
  • Limit ability to modify firmware, including critical data. Limiting the ability to modify firmware makes it more challenging for bad actors to install malware on vehicles.
  • Control internal vehicle communications. Where possible, avoid sending safety signals as messages on common data buses. If such safety information must be passed across a communication bus, the information should reside on communication buses that are segmented from any vehicle ECUs with external network interfaces. For critical safety messages, apply a message authentication scheme to limit the possibility of message spoofing.
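The key-management point above, that no single key should unlock multiple vehicles, is commonly met by deriving a unique key per vehicle (and per subsystem) from a master secret that never leaves the factory. A hedged sketch using HMAC-SHA256 as a simple key-derivation function (the master key and identifiers are placeholders; a production system would keep the master key in an HSM and use a standardized KDF such as HKDF):

```python
import hashlib
import hmac

# Placeholder master secret; in practice this lives in an HSM at the
# manufacturer and is never stored in any vehicle.
MASTER_KEY = bytes(32)

def derive_key(vehicle_id: str, subsystem: str) -> bytes:
    """Derive a unique 32-byte key from vehicle and subsystem identifiers."""
    context = f"{vehicle_id}/{subsystem}".encode()
    return hmac.new(MASTER_KEY, context, hashlib.sha256).digest()

k1 = derive_key("VIN123", "infotainment")
k2 = derive_key("VIN456", "infotainment")
print(k1 != k2)  # each vehicle gets a distinct key, prints True
```

Because each derived key is bound to one vehicle and one subsystem, extracting a key from a single compromised ECU gains an attacker nothing against the rest of the fleet.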

The NHTSA cybersecurity best practices report provides a good starting point to fortify automotive applications. However, it is neither a recipe book, nor is it comprehensive. NHTSA also recommends that the industry follow the National Institute of Standards and Technology’s (NIST’s) Cybersecurity Framework, which advises on developing layered cybersecurity protections for vehicles based around five principal functions: identify, protect, detect, respond, and recover. In addition, standards such as ISO/SAE 21434 Cybersecurity of Road Vehicles, which in some ways parallels the ISO 26262 functional safety standard, also provide important direction.

Helping You Secure Your Automotive SoC Designs

Vehicle manufacturers have differing levels of in-house cybersecurity expertise. Some still opt to add a layer of security to their automotive designs near the end of the design process; however, waiting until a design is almost completed can leave points of vulnerability unaddressed and open to attack. Designing security in from the foundation can avoid creating vulnerable systems (see the figure below for a depiction of the layers of security needed to protect an automotive SoC). Moreover, it’s also important to ensure that the security will last as long as vehicles are on the road (11 years, on average).

Layers of security needed to protect an automotive SoC.

With our long history of supporting automotive SoC designs, Synopsys can help you develop the strategy and architecture to implement a higher level of security in your designs. In addition to our technical expertise, our relevant solutions in this area include:

Connected cars are part of this mix of things that should be made more resilient and hardened against attacks. While functional safety has become a familiar focus area for the industry, it’s time for cybersecurity to be part of the early planning for automotive silicon and systems, too. After all, you can’t have a safe car if it is not also secure.

To learn more visit Synopsys DesignWare Security IP.

Also read:

Identity and Data Encryption for PCIe and CXL Security

High-Performance Natural Language Processing (NLP) in Constrained Embedded Systems

Lecture Series: Designing a Time Interleaved ADC for 5G Automotive Applications


Are We Headed for a Semiconductor Crash?

by Daniel Nenni on 02-02-2022 at 6:00 am


COVID was certainly a black swan event, but semiconductors have seen similar events over the past 50 years, some of which I have experienced personally. The Dot-com bubble comes to mind, but there were others. The question is whether history will repeat itself, and the answer, according to Malcolm Penn of Future Horizons, is yes.

Malcolm is a longtime friend, colleague, and one of my trusted few. I used to attend the live version of his Annual Industry Update and Forecast here in Silicon Valley but now it is virtual like everything else semiconductor. Malcolm has also been a guest on our Semiconductor Insiders Podcast: Podcast EP40: The Semiconductor Supply Chain and the Real Cause of Semiconductor Shortages.

For the 2022 Update Malcolm spent an hour covering 33 slides in great detail, including his previous high-end prediction of a 24% increase in semiconductor revenue for 2021. It ended up closer to 26%, but he was closest (I was at 10-15%). The most important part of the presentation to me was the historical look at the semiconductor industry and his prediction for 2022. Spoiler alert: a crash may be coming.

You can get his complete slide deck HERE for £150 (GBP), which is quite the deal when you consider the time invested. A highlights reel is at the bottom of this page.

If you look at his opening slide you can see the historical ups and downs, including the 2000 Dot-com bubble I mentioned earlier. One of his slides shows the previous upturns and downturns since 1961 in more detail. While the current bust of 2019 and boom of 2021 don’t quite measure up to the Dot-com cycle, they are still significant. This sets up the Perfect Storm slide (#6 in the deck), and after a decade of single-digit growth you really have to wonder.

Malcolm also mentioned EDA, so let’s look at that in more detail. EDA is historically a single-digit-growth industry, but lately, as you have read on SemiWiki, it has been booming with double-digit growth.

ESDA Reports Double-Digit Q3 2021 YOY Growth and EDA Finally Gets the Respect it Deserves

ESD Alliance Reports Double-Digit Growth – The Hits Just Keep Coming

Is EDA Growth Unstoppable?

The reasoning is twofold: First and foremost, systems companies are rushing to do their own chips and this now includes automobile companies. The chip shortage is a big driver but the increasing software burden of the systems companies is a close second. Automated cars now include millions of lines of code and this code can be developed and optimized in parallel with chip design. The smart phone companies figured this out a long time ago when Apple and others started doing their own SoC chips.

Second, venture capital has been pouring into the chip sector at record rates. AI is a big driver for startup chip companies, and electric vehicles are another bubble waiting to pop. At last count there were 300 companies developing AV/EV-related products, I mean WOW. Again, déjà vu: the Dot-com bubble.

Malcolm then moves onto key drivers, their impact and roles. The key drivers are the Economy, Unit Demand, Capacity, and ASPs. Malcolm goes into detail but I will make an additional comment on capacity.

We have capacity; that has never been the problem. Utilizing that capacity is another story. For example, in 2019 TSMC saw a -7% downtick in automotive chips and another -7% in 2020. That is why the car companies did not have enough chips: they cancelled orders. In 2021 TSMC saw a 51% uptick in automotive, and 2022 will probably be the same since inventories are building like never before.

But the chip shortage narrative continues, and so does the CAPEX contest between Intel, TSMC, and Samsung. The biggest difference is that TSMC builds capacity based on customer orders with some big pre-pays, and the IDM foundries do not. TSMC is also building big capacity for Intel, which complicates things a bit. The one saving grace is that the equipment companies are going to have a difficult time equipping all of these new fabs with the supply chain issues they are suffering, especially ASML and EUV. No way will they be able to outfit all of the new leading-edge logic and memory fabs that are in the press release phase.

Malcolm continues with his agenda and has us approaching the top of a rollercoaster. He does not show how steep the drop-off is, but he is convinced it is coming. He has a nice graphic for that one as well. His forecast for 2022 is 4% at the low end and 14% at the high end. I’m a bit more optimistic, with a 10-15% industry forecast and TSMC again hitting a 20%+ growth rate.

Malcolm finished with key takeaways and the Q&A. For me it’s all about the supply chain which Malcolm covers in detail. When the dust settles and COVID is under control we will see a much stronger supply chain that will not be schooled again, just my opinion of course.

Also read:

The Roots Of Silicon Valley

The Semiconductor Ecosystem Explained

Are We Headed for a Semiconductor Crash?



A 2021 Summary of OpenFive

by Kalar Rajendiran on 02-01-2022 at 10:00 am


Building a better mousetrap plays a key role in achieving market success in any industry. Of course, building one requires differentiating the product from the others already in the market. A differentiated product can even lead to creating demand for new products in adjacent markets. All of this is great, but how do you implement the differentiation? In the semiconductor industry, it is through a combination of custom circuits and software. While custom silicon solutions as a market segment have been around for a long time, the segment has certainly gone through its waxing and waning phases. The partitioning of customization between hardware and software is an ongoing balancing act, dictated by various driving factors.

Currently, there are a few trends that are reinvigorating the custom silicon solutions industry. The move toward domain-specific architectures, more processing at the edge and edge-AI accelerators are some of those trends. The spectrum of power, performance, area (PPA) and latency requirements demand custom silicon. These factors are driving innovations in design methodology, interfaces, and packaging to name just a few areas. Custom silicon solution companies will be playing a critical role over the coming years.

Who is OpenFive?

OpenFive is a full-service custom silicon solutions company with differentiated IP that offers a proven path from custom SoC architecture to volume silicon. More from their website below.

“OpenFive offers end-to-end expertise in Architecture, IP Integration, Design Implementation, Software, Silicon Validation, and Manufacturing to deliver high-quality silicon in advanced nodes down to 5nm. With spec-to-silicon design capabilities, customizable platforms, and differentiated IP for Artificial Intelligence, Datacenter/High Performance Computing, Networking, and Storage applications, OpenFive is uniquely positioned to deliver highly competitive domain-specific processor-agnostic SoCs customized for your application.”

You can learn more about OpenFive from an interview with their CEO that SemiWiki published.

With the above as the backdrop, let’s review how OpenFive fared in 2021.

As an overview, they completed a year filled with interesting accomplishments and are poised for continued growth in 2022. Some of their 2021 accomplishments are highlighted below.

Chiplet-optimized IP Subsystems

With unrelenting time to market pressures, the key to increased productivity is leveraging pre-verified IP subsystems. OpenFive offers many different IP subsystems addressing connectivity and memory interfaces. You can learn more at their IP portfolio page.

With chiplet-based design implementations gaining momentum, OpenFive developed a die-to-die (D2D) IP subsystem. The subsystem supports low-power, high-throughput, low-latency links, enabling quicker integration of heterogeneous chiplet connections in wired communications, AI and HPC applications. You can learn more from a SemiWiki post published last year.

Edge AI Vision Platform

With the market trends noted earlier, standard chips will have difficulty meeting the wide range of performance, power and latency requirements. A custom system-on-chip (SoC) tailored for a specific edge application will deliver a competitive advantage in the marketplace. The competitive differentiation is typically contained within a small portion of the SoC, while the bulk implements necessary, but not differentiating, functions.

Given the above, it makes sense to start with a platform that includes preconfigured subsystems and customize with one’s key differentiators. The customization could be limited to just adding your own custom accelerators. Or it could include customizing the different subsystems and mixing and matching with different interfaces to meet an application’s unique requirements.

With a long list of edge AI applications, OpenFive launched an Edge AI Vision platform that makes it easier to build customized edge AI SoCs. You can learn more from a SemiWiki post published last year.

 

OpenFive offers a turnkey solution by handling the entire process from design to IP procurement to manufacturing, test, prototypes and production.

Processor IP

While OpenFive as a custom silicon solutions provider is processor agnostic, they do have a front-row seat when it comes to RISC-V. With a multitude of RISC-V SoC designs and tapeouts, OpenFive is well positioned to translate the advances SiFive makes into custom silicon solutions. With the tremendous momentum behind RISC-V, this could be a major differentiator for customers. For example, SiFive has been enhancing RISC-V vector extensions to accelerate performance on ML workloads. You can learn more from a SemiWiki post published last year.

Leading-Edge Design to Production Capabilities

OpenFive offers custom silicon services on a wide range of foundries and process nodes including leading-edge process nodes. They announced a successful tapeout of a high-performance SoC on TSMC’s N5 process. The SoC is targeted for HPC/AI, networking and storage solutions and includes their HBM3 IP subsystem, D2D IP subsystem as well as SiFive’s E76 32-bit CPU core. You can learn more from their press release here.

Commitment

Even during a pandemic-stricken year, their consistent presence at industry and partner events was commendable.

You can review their extensive set of resources/collateral in the form of on-demand webinars, technical papers, brochures here.

Also Read:

Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads

Die-to-Die Interface PHY and Controller Subsystem for Next Generation Chiplets

Enabling Edge AI Vision with RISC-V and a Silicon Platform

 


WEBINARS: Board-Level EM Simulation Reduces Late Respin Drama

WEBINARS: Board-Level EM Simulation Reduces Late Respin Drama
by Don Dingee on 02-01-2022 at 6:00 am

Flat Z design and voltage ripple example in board-level EM simulation

Advanced board designs are fertile ground for misbehavior in time and frequency domains. Relying on intuition, then waiting until near-final product for power integrity (PI) or EMI testing, almost guarantees board respins are coming. Lumped-parameter simulations of on-board power delivery networks (PDNs) struggle with predicting behavior in the face of parasitics. A new series of Keysight engineering webinars dives inside PDNs with critical insights on where PI problems develop. Results from Keysight's board-level EM simulation and EMI compliance analysis tools and techniques can help teams reduce late respin drama.

Modeling on-board power delivery

One of these webinars tears into decoupling capacitors. If a few are good, more must be better, right? That myth gets busted quickly, and so does another. Capacitor manufacturers want engineers to think they’re buying a certain value of a capacitor. Instead, what every manufacturer ships is a bundle of impedance with resistance, inductance, and capacitance.

Now multiply by a bunch of decoupling caps of different values across a board. Clean power delivery and controlled EMI depend on how those capacitors interact with loads, and each other, at various frequencies. A PDN provides three sources of power:

  • The voltage regulator module (VRM), which resembles a low-pass filter. At higher frequencies, the VRM control loop falls behind and its output impedance goes inductive.
  • The decoupling capacitors, which become a bandpass filter as they also go inductive beyond their self-resonant frequency.
  • Any on-chip capacitors and parasitic package capacitance, not large enough to supply power at lower frequencies but able to stabilize delivery as frequency increases.

Minimizing voltage ripple and preventing the “rogue wave” resonance calls for flat impedance from those three sources over a range of frequencies.
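The impedance behavior described above falls out of a simple series-RLC model of a real capacitor. The short Python sketch below illustrates it; the 100 nF / 10 mΩ ESR / 1 nH ESL values are illustrative placeholders, not figures from the webinar:

```python
import math

def cap_impedance(f, C, esr, esl):
    """Impedance magnitude of a real capacitor: ESR, ESL and C in series.
    Z = R + j(wL - 1/(wC))"""
    w = 2 * math.pi * f
    reactance = w * esl - 1.0 / (w * C)
    return math.sqrt(esr**2 + reactance**2)

def self_resonant_freq(C, esl):
    """Frequency where capacitive and inductive reactance cancel; above
    this point the 'capacitor' looks inductive."""
    return 1.0 / (2 * math.pi * math.sqrt(esl * C))

# Illustrative 100 nF MLCC with 10 mOhm ESR and 1 nH mounting + package ESL
C, esr, esl = 100e-9, 0.010, 1e-9
f_sr = self_resonant_freq(C, esl)
print(f"self-resonant frequency ~ {f_sr / 1e6:.1f} MHz")
for f in (1e5, f_sr, 1e9):
    # At resonance only the ESR remains; far from it, reactance dominates
    print(f"{f / 1e6:10.2f} MHz -> |Z| = {cap_impedance(f, C, esr, esl) * 1000:.2f} mOhm")
```

Sweeping this model for every decoupling capacitor value on a board, and summing their parallel contributions, is essentially what motivates the flat-impedance target, though the EM field solver adds the board parasitics a closed-form model cannot capture.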

The discussion continues with a look at the pitfalls of lumped-parameter EM simulations, then wraps up with an in-depth FPGA placement case study. “The simulations are only as good as the models,” says Keysight’s Heidi Barnes, SI/PI Application Engineer. Keysight helps vet and optimize capacitor models used in its PathWave ADS PIPro EM simulation tool – for example, models with capacitor package mounting inductance removed improve results. The EM field solver in PIPro generates an S-parameter model, including all components and parasitic effects, enabling board-level EM simulation matching real-world measurements.

Going back to the power source with board-level EM simulation

Another webinar in this series looks at one specific type of design: switched-mode power supplies (SMPS). Switching magnetics reduces size and improves efficiency but throws off noise and EMI as part of the process. Power supplies also fall under a wide range of EMI compliance specifications in different geographic markets and application segments. Designs again often fall victim to expensive late-stage test failures and respins, especially when moving from one market where testing passed to another market with a different profile.

Noise correlates to the high switching speeds and currents, or high di/dt in industry terms. Square-wave switching waveforms also throw off strong harmonics at even higher frequencies, up into RF ranges. Wide band gap (WBG) transistor technology, such as GaN or SiC, is also changing the picture. These technologies lower impedance and enable higher switching speeds. But, parasitic effects once buried by more dominant terms now rise to the surface.
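One way to see why square-wave switching reaches into RF ranges: the Fourier amplitudes of an ideal square wave fall off only as 1/n with harmonic number. A minimal sketch (ideal edges assumed; real trapezoidal edges roll off faster above a corner set by the rise time):

```python
import math

def square_harmonic(n, amplitude=1.0):
    """Fourier amplitude of the n-th harmonic of an ideal square wave.
    Only odd harmonics are present; each falls off as 4A/(n*pi)."""
    return 4 * amplitude / (n * math.pi) if n % 2 else 0.0

# A 1 MHz, 1 V switching waveform still has significant energy at high harmonics
for n in (1, 3, 5, 7, 9):
    a = square_harmonic(n)
    print(f"harmonic {n} ({n} MHz): {20 * math.log10(a):+.1f} dBV")
```

The slow 1/n decay is why harmonics of even a modest switching frequency show up in conducted and radiated EMI scans well above the fundamental.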

“The pre-layout schematic simulation using Spice gives you a best-case result,” according to Steven Lee, Product Manager for Keysight PathWave Power Electronics. That can trap SMPS engineers in a complex test-respin cycle where differential noise, common mode noise, resonance and harmonic spikes are hard to predict and mitigate. Adding EMI filters can have unexpected effects as capacitors and board parasitics reveal their impedance characteristics.

A more effective approach for better EMI results is post-layout simulation and layout adjustments, introducing minimal EMI filtering. Using Keysight PathWave ADS PEPro brings layout-based design and board-level EM simulation technology optimized for power electronics, with similar field solvers for parasitic extraction. Steven walks through a demo of PEPro during the webinar.

Analyzing EMI compliance quickly 

A big workflow improvement is the EMI compliance overlays in PEPro. Instead of manually scanning and comparing EMI plots to specifications, pre-loaded compliance profiles including FCC Class A and B, CISPR 22 Class A and B, and CISPR 25 Class 1 through 5 can drop over simulation results with a couple of clicks.
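Conceptually, a compliance overlay is a step-function limit mask compared point-by-point against the simulated spectrum. The sketch below shows the idea; the mask values are placeholders for illustration, not actual FCC or CISPR limits:

```python
# Hypothetical step mask: (start frequency in MHz, limit level). The numbers
# are placeholders, NOT actual regulatory limits.
EXAMPLE_MASK = [(30, 40.0), (88, 43.5), (216, 46.0), (960, 54.0)]

def limit_at(freq_mhz, mask):
    """Return the limit of the mask segment containing freq_mhz."""
    level = mask[0][1]
    for f_start, lim in mask:
        if freq_mhz >= f_start:
            level = lim
    return level

def violations(spectrum, mask):
    """spectrum: list of (freq_mhz, level) points; return points over the mask."""
    return [(f, v) for f, v in spectrum if v > limit_at(f, mask)]

simulated = [(50, 38.2), (100, 45.1), (300, 44.0)]
print(violations(simulated, EXAMPLE_MASK))  # only the 100 MHz point exceeds its limit
```

The tool automates exactly this comparison across pre-loaded profiles, so switching target markets is a profile swap rather than a manual re-read of the spec.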

Tightening the loop between developing higher-fidelity EM models, making board layout adjustments that account for parasitics and impedance behavior over frequency, and visualizing EMI compliance virtually before moving to physical test is a big breakthrough.

“You should be board-level EM simulating”

The main takeaway from this Keysight webinar series is that power integrity and power electronics teams should be simulating to know more about their designs sooner. For teams whose prior simulation experience was limited to less accurate models and lumped parameters, it's time to bring in better board-level EM simulation tools that can help shift-left virtual testing, leaving the late respin drama behind.

 

Webinars available on-demand now:

Optimizing Capacitor Selection and Placement for Power Integrity

Reduce EMI in Switched-Mode Power Supply Design

 

Future live events:

Both Heidi and Steven have new, live webinars coming with more thoughts on PI and EMI in the coming months – watch the main Keysight Webinars page for these and other live events.

Also Read

Shift left gets a modulated signal makeover


Faster Time to RTL Simulation Using Incremental Build Flows

Faster Time to RTL Simulation Using Incremental Build Flows
by Daniel Payne on 01-31-2022 at 10:00 am


I've been following Neil Johnson on Twitter and LinkedIn for several years now, as he has written and shared so much about the IC design and verification process, both as a consultant and working at EDA vendors. His recent white paper for Siemens EDA caught my eye, so I took the time to read through the 10-page document to learn more about the build flow choices now available to SoC teams.

Modern RTL languages must be compiled before they are ready to simulate, so turnaround time is always important. Engineering teams using functional simulation from Siemens EDA rely on the Questa simulator, and there are a few ways to approach its build flows.

Lump Sum Build Flow

The simplest build flow has all of your input files (modules, packages, Testbench, DUT) compiled into a single library.

Lump Sum Build

This approach works efficiently for design complexity up to the sub-system level, and teams with a handful of engineers can handle any dependency issues. Typical build times are on the order of seconds, so it’s a good method.

Partitioned Compile

If your SoC team integrates significant design IP and layers its testbenches, then consider partitioned compile, where libraries are compiled separately and then gathered into a single simulation input.

Partitioned Compile

As your sub-systems grow larger, compile times become faster when you move from the lump sum to a partitioned compile.

Parallel Compile

By partitioning your library and using more cores in parallel, build times are further reduced, so it's the classic tradeoff of the number of EDA licenses versus time.

Pre-optimized Design Units (PDU)

Parts of an SoC hierarchy can be compiled independently into PDUs, then used as input to simulation. These PDUs are areas of the design that rarely change, so this avoids re-compiling a stable design area. With this build approach you are only compiling the areas of code where changes have been made.

Pre-optimized Design Units (PDU)

Elaboration Flow

Elaboration is where modules are bound to module instances, model hierarchy is built, parameter values are computed, hierarchical names get resolved, and nets are connected. With Questa you create an elaboration file on the first test run then just re-use that file in the following runs.

Elaboration Flow

Command Line Front-end

Questa users have the Qrun tool to compile their designs, so here’s how it works for each of the build flow choices.

Lump Sum Build

Specify your testbench files and design files, then qrun will compile, optimize and simulate.

qrun -f testbench_files.f -f design_files.f

Partitioned Compile

To separate testbench and design files in the partitioned compile build flow, use qrun with this syntax:

qrun -makelib testbench_library -f testbench_files.f -end \
     -makelib design_library -f design_files.f -end

Parallel Compile

Compiling in parallel across multiple cores with Qrun looks just like the partitioned compile with one extra option:

qrun -makelib testbench_library -f testbench_files.f -end \
     -makelib design_library -f design_files.f -end \
     -parallel

Pre-optimized Design Units (PDUs)

Using Qrun for the PDU build flow adds the makepdu option:

qrun -makelib testbench_library -f testbench_files.f -end \
     -makelib design_library -f design_files.f -end \
     -makepdu design_top design_library.top -L design_library -end \
     -parallel

Elaboration Flow

The first time through the elaboration flow creates an elaboration file, then subsequent runs reuse the elaboration file:

qrun -makelib testbench_library -f testbench_files.f -end \
     -makelib design_library -f design_files.f -end \
     -makepdu design_top design_library.top -L design_library -end \
     -parallel \
     -elab elaboration.output -do "quit"

qrun -load_elab elaboration.output -do "run -all; quit"

Summary

Build flows are an important part of what SoC teams do in getting a design completed and verified in the shortest time possible. With Questa there are multiple build flows to choose from, depending on the complexity and composition of your project. The Qrun command line tool supports each build flow choice with a simple syntax, and it’s even smart enough to incrementally compile only the files that have changed or when dependencies change.
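The incremental logic behind that last point is conceptually the same timestamp comparison a Makefile performs: recompile a source only when its compiled artifact is missing or older. A minimal sketch of the idea in Python (not Questa's actual implementation, which also tracks dependencies between files):

```python
import os

def needs_recompile(source, artifact):
    """A source must be recompiled if its artifact is missing or stale."""
    if not os.path.exists(artifact):
        return True
    return os.path.getmtime(source) > os.path.getmtime(artifact)

def incremental_targets(sources, artifact_for):
    """Return only the sources whose compiled output is out of date.
    artifact_for maps a source path to its compiled-artifact path."""
    return [s for s in sources if needs_recompile(s, artifact_for(s))]
```

In practice the tool must also recompile dependents of a changed file (e.g. every module importing a changed package), which is why dependency tracking, not just timestamps, matters.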

Read the complete 10-page white paper online here.

Related Blogs


Breker Attacks System Coherency Verification

Breker Attacks System Coherency Verification
by Bernard Murphy on 01-31-2022 at 6:00 am


The great thing about architectural solutions to increasing throughput is that they offer big improvements. Multiple CPUs on a chip with (partially) shared cache hierarchies are now commonplace in server processors for this reason. But that big gain comes with significant added complexity in verifying correct behavior. In a shared memory model, the value stored in a logical memory address appears not only in main (DRAM) memory but also potentially in multiple on-chip caches and even possibly in buffers in the coherency fabric. Which raises a consistency issue – for a given logical memory address these should all contain the same value under all circumstances, but do they? Breker attacks this coherency verification problem through their test synthesis technology, looking at full system coherency in heterogeneous systems.

The challenges in coherency

The logical memory view can become inconsistent when two or more processors are working with their own local copy of a value at a shared address and one updates its local value, unknown to the other. Chaos ensues as CPUs each work with their own view of reality. The coherent fabric itself has structure supporting data in-flight, such as write-back buffers. Coherency verification must also deal with these. And then of course peripherals can write directly to main memory through DMA, unknown to internal caches.

Clearly some mechanism is needed to keep shared values in sync when required. But it must be light-handed. Synchronization comes with a latency penalty which can significantly reduce the performance advantage of caching, unless used only when absolutely needed. The latency problem is compounded further when you consider shared-memory multi-socket processor boards interconnected via CXL, adding more complexity to solutions.

Clever techniques are used to spy on cache contents and changes, to determine when a synchronization update is necessary. These include snooping and directory-based coherency systems, which tag cache addresses (more exactly cache lines) as being clean (coherent) or invalid, with a variety of refinements. These methods must walk a fine line between minimizing net performance impact while ensuring no possible escapes. Escapes being possible cases in which a non-coherent condition can survive.
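As a rough illustration of the snoop-based invalidation described above, here is a toy MSI-style (Modified/Shared/Invalid) model in Python. It is a teaching sketch only: real protocols add more states (e.g. MESI's Exclusive), handle write-backs of modified data, and arbitrate concurrent requests, all of which this omits:

```python
class Cache:
    """Toy per-CPU cache holding (state, value) per address.
    States: 'M' modified, 'S' shared, 'I' invalid."""

    def __init__(self, name):
        self.name = name
        self.lines = {}  # address -> (state, value)

    def read(self, addr, memory):
        state, value = self.lines.get(addr, ("I", None))
        if state == "I":  # miss: fetch from memory, mark Shared
            self.lines[addr] = ("S", memory[addr])
        return self.lines[addr][1]

    def write(self, addr, value, peers):
        for peer in peers:  # snoop: invalidate every other cached copy
            if addr in peer.lines:
                peer.lines[addr] = ("I", None)
        self.lines[addr] = ("M", value)

memory = {0x100: 7}
c0, c1 = Cache("cpu0"), Cache("cpu1")
c0.read(0x100, memory)            # both CPUs now hold 7 in state Shared
c1.read(0x100, memory)
c0.write(0x100, 42, peers=[c1])   # snoop invalidates cpu1's stale copy
print(c1.lines[0x100][0])         # 'I': cpu1 must refetch rather than read stale data
```

The verification challenge is that bugs hide in the interactions this sketch leaves out: in-flight buffers, write-backs, power and interrupt events racing against tag updates.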

The Breker System Coherency TrekApp

To check that the design does not cross that fine line, verification engineers must independently construct tests which they believe will cover all possible cases. Cache, fabric and IO coherence across all control variations (power, interrupt, clocking, etc) through which the design might cycle. That’s where the Breker Trek System Coherency app comes in.

Adnan Hamid, founder of Breker, started many years ago in coherency verification at AMD. He built the ideas he developed there, around cache coherence verification and system verification methods in general, into Breker. The coherence solution expanded over time to also include fabric and IO coherence and interaction with power switching, etc. After proving this capability out with a few lead customers, Breker announced the product at the recent DAC 2021 in San Francisco.

Adnan offers an insight: to know how to achieve meaningful coherence verification, you first must know how to measure coverage. As with any system-level coverage objective, RTL coverage metrics aren’t helpful. More useful is coverage first of the coherency manager state machine per cache, with variants in cache values and address stride, then a similar coverage for cross-cache interactions, then coverage across a synthetic set of software-based torture tests, crossed with power and other transitions, running on an emulator. The System Coherency TrekApp supports all of this.

What about escapes?

Talk to anyone working with coherent designs and they’ll all tell you they find coherency problems post-silicon. Getting close to that fine line without crossing it is really quite difficult. After all you’re trying in pre-silicon verification to model a vast state space with, in comparison, a tiny set of tests, even if you run tens of thousands of tests. Given that exhaustive testing is not even remotely possible, the trick is to find the best practical set of tests to run. Since this will be unavoidably incomplete, the System Coherency TrekApp extends even to post-silicon, helping to diagnose silicon failures. Perhaps a power transition in the middle of synchronization. Or an interrupt unfortunately timed against a tag update. In Adnan’s view, this post-silicon learning will help refine the pre-silicon verification coverage plan. To reduce if not eliminate post-silicon escapes.

Interesting stuff, incidentally supported for both Arm-based and RISC-V-based systems, recently endorsed in a press release with Nuclei System Technology. You can learn more HERE.

Also Read:

WEBINAR: Adnan on Challenges in Security Verification

Breker Tips a Hat to Formal Graphs in PSS Security Verification

Verification, RISC-V and Extensibility


KLAC- Great quarter and year – March Q is turning point of supply chain problem

KLAC- Great quarter and year – March Q is turning point of supply chain problem
by Robert Maire on 01-30-2022 at 10:00 am


-KLAC – great QTR & calendar year but supply chain impacted
-Management feels supply chain to improve after March Q
-Demand remains strong, driven by foundry/logic
-Process management is next best place in industry after litho

Great end to calendar year

KLA reported revenues of $2.53B with non-GAAP EPS of $5.59, nicely exceeding street expectations of $2.33B and EPS of $5.45. Guidance was muted due to supply chain issues at $2.2B ±$100M and non-GAAP EPS of $4.80 ±$0.45, versus expectations of $2.37B and $5.50 in EPS.

March is worst of supply chain impact

Management was clear and adamant that March would be the worst of the supply chain impact and that things would improve going forward for the remainder of the year. The company estimated that the March quarter would see an 8-10% negative impact on revenue. Importantly, that revenue would likely ship in June, creating an uptick.

This is certainly in contrast to Lam, which didn't identify a clear end to its issues and seemed more open ended as to how long there would be supply chain issues. While we are certainly not happy to see the issues finally crop up, we feel better that the impact seems limited to one quarter, with almost all of the revenue just slipping into the next quarter.

In KLA's case, there is essentially zero likelihood that KLA will lose any revenue to competitors, as they supply very unique products that are certainly less interchangeable than dep and etch products.

Process control continues to outperform overall WFE

Process control tools such as those made by KLA continue to grow faster than the overall market as rapidly increasing process complexity requires more process control at higher costs as we continue to push the limits of physics.

Process control follows litho sales and complexity and is somewhat of a shadow proxy for ASML's sales and growth. Wafer and especially reticle inspection are driven by the increasing lithographic challenges. We see this outperformance in the mid to high single digits continuing in 2022.

Being a play on foundry helps in the current environment

While Lam remains the poster child for memory manufacturing, KLA remains the poster child for foundry/logic, which was 79% of its business.
The huge bump in spending by TSMC, coupled with what will likely be a large bump by Intel as well, will clearly benefit KLA, as those are two key and significant customers.

While memory spend remains solid it is also conservative as the industry wants to have supply and demand remain in balance. The challenges in 3D NAND are clearly one of the big drivers of process control in the memory space.

Backlog is Beautiful

KLA has historically had good backlog, which enables them to dial in and control their numbers better than most in the industry. Some KLA products are quoting deliveries of over a year, and given such strong demand, a year's backlog in products is not out of the norm at this point.

While KLA's backlog may not be exactly like ASML's, it's not far off. KLA obviously has the added benefit of superb gross margins. The current super-strong demand environment, coupled with the constrained supply chain, will keep backlog high and likely growing. Although the supply chain issues may get better after the March quarter, we think backlog will remain high due to current demand, which will not diminish.

The Stock

Investors will obviously not like the weak guide for the March quarter, but the negative impact on the stock should be more muted, as the worst of it will be March and things will pick up after that, with the revenue just slipping into June.
Obviously, overall market sentiment and volatility are quite horrible, so the limitation of the impact to a single quarter may not matter, as investors are just in a general supply chain panic.

We could see some collateral help from Apple talking about supply chain issues improving which would lend credence to KLA’s view of March as the low point with the rest of the year up from there.

The stock has lost quite a bit for such a high-quality name, which makes us feel more attracted to it, especially if it were to trade off too sharply.
Unfortunately, the recent volatility continues to reduce predictability and makes investors wary of even high-quality stories.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

LRCX- Supply Chain Catches up with Lam- Gets worse before better- Demand solid

ASML Too Much Demand Plus Intel and High NA

Forty Four Billion Reasons Why TSMC Remains Dominant