

Achieving Scalability Means No More Silos
by Mike Gianfagna on 07-01-2021 at 6:00 am


This is a story of contrasts and counter-intuitive results. Perforce recently published a white paper discussing enterprise scalability – what it takes, why it’s important and what can get in the way. The discussion will shake up some long-held notions regarding effective project management. The results can be significant, so it’s worth a look. Beyond the white paper, there is also a blog where you can learn more. Links are coming, but first let’s look at why achieving scalability means no more silos.

We Always Did It This Way

Managing tasks with a project-centric view is a natural way to keep track of things. Often, a new project starts by loading information into a requirements management tool. Bug tracking and design management tools are typically deployed in the early stages of a project as well. The whole process seems natural if you consider yourself to be working on one project at a time. Items like cost, resources, and timelines are also typically tracked on a project basis in most enterprises.

Projects are typically isolated from one another with this approach and that’s where the problems start.

What’s the Problem?

Because projects are isolated from one another, project “silos,” or local data repositories, develop. These silos are typically not integrated with other project silos, so enterprise scalability becomes difficult. So does collaboration. The lack of integration also erodes traceability. Let’s examine a few of these challenges in more detail.

Most projects involve a lot of IP reuse, so teams typically need to access IPs and blocks from other projects. Lacking a central IP management system, this access is usually accomplished by linking to IPs and blocks through mechanisms like Git “submodules” or Subversion “externals”. Most project-centric data management tools support these functions, and the approach often works.

Or does it?

While this approach seems effective at first glance, challenges for the project and IT support teams can develop. Ad-hoc, peer-to-peer, untracked dependencies have many negative consequences. With each new project, a new design management server gets instantiated. These servers typically persist for a long time since no one wants to remove or delete the server, even after the project is finished. Fear of unknown consequences kicks in.

To make matters worse, many projects spawn other projects and can linger on for years, even decades. After a while, enterprises can have hundreds to thousands of servers running. It’s quite difficult for IT to characterize the impact and importance of any given server. More fear of unknown consequences.

Let’s stay with the IT challenges for a moment. Upgrading servers, performing security patches, and managing the hardware can all take days to weeks. Downtime creates a lot of stress. In the white paper, Perforce reported that one medium-sized customer commented that they had simply given up on updating their servers when the number hit 300 and eventually decided to scrap the system altogether.

The white paper goes on to discuss many more shortcomings of the typical approach. I encourage you to get your copy and read the discussion first-hand. A link is coming. To whet your appetite, here are some of the topics that are discussed:

  • Tracking complex project dependencies is difficult when there is no big picture
  • Collaboration becomes challenging – lots of interdependent permissions to manage
  • Licensing is hard to keep track of, potentially buying IPs the organization already owns or using IPs in applications that are not allowed
  • Issue tracking has limited impact – what other projects will see that bug?
  • Export control – no one wants to be on the wrong side of these rules

You should start to see the pitfalls of a perfectly reasonable approach to project management.

What Should You Do?

In a nutshell, break down your project silos. Adopting an IP-centric management approach addresses the headaches cited above and results in superior enterprise scalability. If your company will ever work on more than one project, you need to consider these strategies. The white paper outlines the benefits and suggests a way forward. You can even connect with an expert to discuss your options. You can get your copy of the white paper here. You can also learn more from a recent blog posted by Perforce here. Perforce has studied this problem for quite some time, and they bring a lot to the table. You can learn more about what they’re up to in SemiWiki’s coverage here. Now you know why achieving scalability means no more silos.

Also Read:

Your IP Portfolio is Probably Leaking. What Can You Do About It?

Perforce Embedded DevOps Summit 2021 and the Path to Secure Collaboration on the Cloud

Single HW/SW Bill of Material (BoM) Benefits System Development



Safety + Security for Automotive SoCs with ASIL B Compliant tRoot HSMs
by Kalar Rajendiran on 06-30-2021 at 10:00 am

New Architectures Reshaping Auto SoCs

The automotive segment is a market that has historically been supported by a few select suppliers within the semiconductor ecosystem. Over the last decade, this market has transitioned from being just about reliability, performance, fuel efficiency, etc., to placing equal importance on user experience. This user experience includes the in-cabin experience in terms of comfort, convenience, connectivity, driving assistance, safety and security. More semiconductors are needed to deliver this user experience. In particular, Advanced Driver-Assistance Systems (ADAS) and autonomous driving initiatives are major factors behind the increased semiconductor content of automobiles. Consequently, more players have been attracted to this market. Converting the opportunities into profitable revenue depends on how well the application, product and market challenges are overcome.

It is interesting that standardized compliance requirements for automotive electronics started only as recently as a decade ago. The ISO 26262 standard for functional safety of electrical/electronic systems installed in automobiles was defined in 2011 and revised in 2018. This timeline aligns with when we started depending more on electronics to tell us the status/conditions of the automobile. For example, we now depend on electronics to tell us engine oil level, oil condition, tire inflation pressure, etc.

Historically, many semiconductor ecosystem suppliers stayed away from the automotive industry because of its stringent operating-condition requirements. Products were prone to systematic and random faults because they are exposed to changing environmental conditions during operation of the vehicle. As if these challenges were not enough, cybersecurity risks were added to the list as vehicles depend more and more on electronics to deliver safety and security to people and vehicles. Regulations and compliance requirements relating to automotive electronics are fast evolving. As an example, the ISO 21434 standard is being defined to address cybersecurity risk management for automotive systems.

The semiconductor supplier one chooses has to be firmly committed to supporting the automotive market. The supplier has to invest in keeping pace with fully supporting defined standards and compliance requirements. It is in this context that Synopsys’ recent announcement of their ASIL B compliant tRoot HSMs becomes even more significant.

Let’s look at automotive electronics challenges and how Synopsys’ product offerings address these challenges.

Challenges

The architectures behind automotive electronics systems are evolving fast. Traditional microcontroller-based solutions are no match for these kinds of compute workloads. Refer to Figure 1. The evolving architectures involve more and more sensors and actuators, with sensory data fusion used to make decisions. The architectures themselves are evolving from domain-centric designs to more centralized processing and control of the vehicle.

Figure 1:

Any erroneous data introduced into the processing may lead to disastrous results. The solutions that are implemented should have a fool-proof way of managing systematic and random faults. As if these traditional types of faults were not enough, a new type of fault is becoming more prevalent: the “malicious attack” fault. Refer to Figure 2 for definitions of these types of faults.

As a backdrop, according to the AV-TEST Institute, the number of malware programs (not just automotive related) climbed from around 65 million in 2011 to 1.1 billion by the end of 2020. And cybercrimes involving automobiles, although not large in absolute numbers, are growing rapidly.

Secure automotive systems must be able to handle malicious attacks, or better still, prevent them from taking place in the first place. Just imagine if a malicious attack were to target the ADAS or autonomous-driving system of a vehicle. A 2,000 to 6,000 lb automobile could be converted into a deadly weapon.

Figure 2:

 

Synopsys’ Solutions

Synopsys recently announced that it has extended its DesignWare® IP portfolio with offerings to address the safety and security requirements of automotive designs. The value of an offering is determined based on a number of criteria. Can it implement a particular solution, can it implement the solution easily and efficiently, can it implement the solution cost advantageously, and does it have a long-term support and technology roadmap? Is the technology a core competency and focus for the supplier? And most importantly, does the offering support and comply with defined automotive safety and security standards? The answer to all of these questions is in the affirmative.

Definition: tRoot Hardware Security Modules (HSMs) provide a Trusted Execution Environment (TEE) to protect sensitive information and processing and implement security-critical functions such as secure boot, storage, debug, anti-tampering and key management required throughout the device life cycle.

Synopsys’ Hardware Security Module (HSM) IP with root of trust helps defend against malicious attacks. The automotive variant of tRoot HSM adds a broad range of safety mechanisms to its security features, including dual-core lockstep, memory ECC, register EDC, parity, watchdog, self-checking comparators, bus and MPU protection, and dual-rail logic. It also incorporates an ASIL D compliant low-power ARC processor. Refer to Figure 3. By designing automotive SoCs using Synopsys’ tRoot HSM and Processor IP, next generation automotive vehicles can expect to manage random and systematic faults and fend off cyberattacks.

Figure 3:

 

Summary

Synopsys’ standards-compliant tRoot HSMs for automotive satisfy the latest market demands and enable SoC designs to quickly implement safe and secure solutions. If you are involved in designing electronics that go into automobiles, you will want to explore their IP offerings. Leveraging their IP should make it easier to get your products to market faster.

Also Read:

IoT’s Inconvenient Truth: IoT Security Is a Never-Ending Battle

Upping the Safety Game Plan for Automotive SoCs

PCIe 6.0 Doubles Speed with New Modulation Technique



What’s New with UVM and UVM Checking?
by Daniel Nenni on 06-30-2021 at 6:00 am


About once a quarter, I touch base with Cristian Amitroaie, CEO and co-founder of AMIQ EDA, to see what’s new with the company, products, and users. Sometimes he surprises me, as he did earlier this year when he mentioned that their tools check about 150 rules for non-standard constructs in SystemVerilog and VHDL. When we talked last week, he surprised me again when he said they have announced a bunch of new rule checks for compliance to the Universal Verification Methodology (UVM). My response was that UVM has been around for years, and I know that AMIQ EDA has offered rule checks for it, so what could possibly be new? Cristian’s answers were quite interesting.

For a start, he reminded me that an updated version of the UVM standard, IEEE 1800.2-2020, was released last September. As with any new version of any standard, there are new features and new functionality included. The UVM standard documents an application programming interface (API) used by chip verification teams to write simulation models, testbenches, and tests. Thus, the new features are enabled by additional API functionality in the UVM standard. EDA vendors have been adding support for it, and users have started using it, so there are additional API rules checked in the AMIQ EDA Verissimo SystemVerilog Testbench Linter.

But the story is more complicated and intriguing than that. The new UVM release has also removed some outdated parts of the API, while marking others as “deprecated” and slated for removal in the future. This is common practice with updates to language and library standards, but it makes keeping up to date more challenging. Verissimo alerts verification engineers when they use deleted or deprecated functionality so that they can update their code before simulation tools drop support. Other API changes are more subtle in the 2020 standard, such as altered arguments or classes turned abstract that can no longer be instantiated. Verissimo has also added rule checks for compliance with these changes.

One unusual aspect of UVM is that IEEE provides both the standard document and a “reference implementation” of the API library written in SystemVerilog. Cristian said that they developed their new checks based on the standard, and when they checked the reference implementation they found “a few methods, classes, macros, etc.” that were not annotated consistently with the latest version of the standard. He summarized the scope of “problematic” UVM API usage that they detect:

  • Removed API = API definition that no longer exists in the IEEE library
  • Deprecated API = API definition that will be removed from the IEEE library in the future
  • Non-standard API = API definition that exists in the IEEE library, but is not documented in the standard
  • Deviation API = API definition that exists in the IEEE library but whose implementation is not consistent with the standard and should be fixed (which may lead to backward compatibility issues)
  • Contribution API = API definition that exists in the IEEE library, but is not part of the standard, that may be considered for future standardization
  • Implementation API = API definition that exists in the IEEE library, but is neither part of the standard nor considered for future standardization

So, it seems to me that the level of checking that Verissimo provides helps IEEE create a better reference implementation library and helps users switch versions. Verification engineers clearly see the value in being able to check whether existing testbenches will work with the latest revision of the standard. UVM has a long history by now, with multiple versions released first by the Accellera EDA standards organization and later by the IEEE. There have been many API changes along the way, so there is doubtless a lot of old UVM code out there that is no longer compliant with the standard.

Cristian pointed out that the AMIQ EDA approach is not just to report rule violations, but also to propose possible fixes. For example, if Verissimo sees an API call to the deprecated “uvm_default_printer” it will recommend using “uvm_printer::get_default()” instead. Proactive suggestions are valuable when checking old code, and they also provide guidance when updating code or writing fresh code that uses the new API functionality available in IEEE 1800.2-2020. Finally, he noted that all testbench rule checks, including those for UVM, can be run in batch mode with Verissimo or interactively within the AMIQ EDA Design and Verification Tools (DVT) Eclipse Integrated Development Environment (IDE).
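The mechanics of this kind of deprecated-API check can be sketched in a few lines of Python. To be clear, this is not how Verissimo works internally; it is a minimal illustration of pattern-based linting. The mapping table is an assumption: only the `uvm_default_printer` entry comes from the example above, and a real tool would populate it from the standard.

```python
import re

# Hypothetical mapping of deprecated UVM API names to suggested
# replacements; only the first entry comes from the article above.
DEPRECATED_API = {
    "uvm_default_printer": "uvm_printer::get_default()",
}

def lint_deprecated(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, deprecated_name, suggestion) for each hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for old, new in DEPRECATED_API.items():
            # Match whole identifiers only, not substrings of longer names.
            if re.search(rf"\b{re.escape(old)}\b", line):
                findings.append((lineno, old, new))
    return findings

tb = """\
function void report();
  uvm_default_printer.print_field("data", data, 32);
endfunction
"""
for lineno, old, new in lint_deprecated(tb):
    print(f"line {lineno}: '{old}' is deprecated; use '{new}' instead")
```

A production linter works on a parsed syntax tree rather than regular expressions, which is what lets it catch the subtler cases Cristian mentioned, such as altered arguments or classes turned abstract.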

I found this whole conversation enlightening. Standards often evolve in ways that are neither monotonic nor simple, and UVM is no exception. It’s great that users have automated assistance when migrating to the latest version. I’d like to thank Cristian for his time and helpful information. You can keep an eye on https://dvteclipse.com/uvm-ieee-compliance, which AMIQ EDA updates as they add new rules to be checked. I wish you all the best as you harness the power of UVM to verify ever larger and even more complex chips.

Also Read

Why Would Anyone Perform Non-Standard Language Checks?

Does IDE Stand for Integrated Design Environment?

Don’t You Forget About “e”



Neural Nets and CR Testing. Innovation in Verification
by Bernard Murphy on 06-29-2021 at 10:00 am

Instrumenting Post-Silicon Validation

Leveraging neural nets and CR testing isn’t as simple as we first thought. But is that the last word in combining these two techniques? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Automation of Processor Verification Using Recurrent Neural Networks. The paper was presented at the 2017 18th International Workshop on Microprocessor and SOC Test and Verification and comes from Brno University of Technology, Czech Republic.

The authors start with the reasonable view that, as coverage improves, constrained random (CR) generation produces more redundant tests for diminishing returns. Their paper focuses on a CR CPU instruction generator with a probability-controlled distribution over a fixed set of constraints. The generator produces and runs multiple tests, recording coverage for each run. A neural-network (NN) algorithm uses this information to adjust the generator controls, and the whole process repeats for some number of cycles. The authors pre-determine the weights using a deductive method rather than training. One set of weights is essentially random; another is based on the grammatical structure of the CPU ISA.

The authors test their method against two 32-bit RISC CPUs from Codasip, one at 16k gates, another production core at 24k gates. They compare results between their two NNs, a genetic algorithm, and a default CR pattern generator without interference. The NN methods achieve about 5% higher coverage for the same runtime budget versus the default. For higher coverage levels after the knee of the coverage ramp curve, the genetic algorithm does no better than the default.

Paul’s view

Reading this paper led me down a wondrous path into early works on neural networks from the 1980s by JJ Hopfield at Caltech’s department of chemistry and biology.

In those papers there was no NN training. Hopfield deductively constructed NN topologies and weights from first principles to solve a particular problem. And he solved some cool problems, such as a content-addressable memory and the traveling salesman problem.

In the subject paper of this month’s blog, our authors take inspiration from Hopfield and attempt to deductively construct a NN, here using topology and weights based on the grammar of the CPU ISA to increase coverage from a constrained-random CPU instruction generator. It’s a neat idea, but ultimately it doesn’t seem to add any value beyond their control NN. That NN is essentially just a balanced set of random +1 or -1 weights between nodes in their network.

However, what is intriguing from their work is that even the control NN improves coverage significantly compared to the default instruction generator. This control NN can be likened to running the constrained random generator in short bursts, each time randomly adjusting some control knob and either keeping or undoing the knob adjustment depending on whether it improved coverage or not. In essence, if you have some control knobs to a constrained random instruction generator, it’s a good idea to tweak them periodically, even if this tweaking is basically random 🙂
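The keep-or-undo loop described above can be sketched as a tiny hill climber. Everything here is a stand-in, not the paper’s actual setup: the knobs are abstract probabilities, and the `coverage` function is a toy stand-in for running the generated tests through simulation and measuring real coverage.

```python
import random

def coverage(knobs):
    # Toy coverage metric: peaks when the knobs hit a hidden "good"
    # setting. In reality this number would come from simulating the
    # tests emitted by the constrained-random generator.
    target = [0.2, 0.7, 0.5]
    return 1.0 - sum(abs(k - t) for k, t in zip(knobs, target)) / len(knobs)

def tune_knobs(knobs, bursts=200, step=0.1, seed=0):
    """Randomly perturb one knob per burst; keep the change only if
    coverage improved, otherwise undo it."""
    rng = random.Random(seed)
    best = coverage(knobs)
    for _ in range(bursts):
        i = rng.randrange(len(knobs))
        old = knobs[i]
        knobs[i] = min(1.0, max(0.0, old + rng.uniform(-step, step)))
        new = coverage(knobs)
        if new > best:
            best = new          # keep the tweak
        else:
            knobs[i] = old      # undo it
    return knobs, best

knobs, cov = tune_knobs([0.5, 0.5, 0.5])
print(f"final coverage: {cov:.3f}")
```

Since a tweak is only kept when it helps, coverage is monotonically non-decreasing over the bursts, which is exactly why even "basically random" knob adjustment beats leaving the generator alone.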

One thing I should add is that applying modern ML techniques to make constrained-random stimulus generation smarter is a very hot and active topic in commercial EDA today, whether to achieve higher coverage or to achieve the same coverage with dramatically less compute. It absolutely works, and chip and system companies are starting to adopt it widely in production.

Raúl’s view

First, I like that this method is “non-invasive”. It aligns well with existing verification flows (this paper uses a standard UVM flow), which makes the approach incremental and practical for production use. The approach consists of generating changes to the constraints for a pseudo-random generator (PRG), having the PRG generate a set of stimuli, simulating these stimuli, and using the collected simulation data to evaluate the objective function (various types of coverage metrics), optimizing the neuron settings in the process.

For the two Codasip processors they run multiple experiments with NNs of 41 and 1020 neurons, respectively. From these they determine initial NN states, sigmoid steepness and epoch length. In my view, the results are mixed. Compared with the other methods (default and genetic algorithm), the NN is slightly favorable as measured by the coverage reached, but this comes at the cost of building an RNN. They also compute an “optimal” (small, high-coverage) set of stimuli, useful for regression tests. I am worried that the size of the RNN increases significantly for a small increase in design size, and that coverage diminishes for the larger design.

That said, this is very interesting research. Today’s commercial EDA tools are starting to incorporate Machine Learning, and PRG is an application that will likely benefit.

My view

This blog nicely underlines the point of these reviews. Our goal is not to add yet another paper review. It’s to look for intriguing insights. Even if, as in this case, they’re not necessarily the main point of the paper.

Also Read

Circuit Simulation Challenges to Design the Xilinx Versal ACAP

EDA Design and Amazon Web Services (AWS)

Connecting System Design to the Enterprise



Webinar: Learn About NVMe Conformance Testing
by Daniel Payne on 06-29-2021 at 6:00 am


Several years ago I recall upgrading my aging MacBook Pro laptop from using a Hard Disk Drive (HDD) to a Solid State Drive (SSD) that used Non-Volatile Memory (NVM). Oh what a speed improvement when pushing that On button each morning to start the work day, or clicking an App to see it launch without delay. Another epiphany for me in using SSD was at a web hosting vendor, and the new, quicker page loading times meant that I was never going back to the slow, older HDD technology.

Of course, making our electronics industry really scale requires cooperation along with standards, and for SSD memory we look to the NVM Express group, the non-profit consortium that has specified exactly how host software should talk with NVM using different transports, like:

  • PCI Express
  • RDMA
  • TCP

Moving from the NVMe 1.4b standard to version 2.0 adds new features, so engineers involved in designing, verifying and validating SSD systems need to stay up to date. You should consider attending a webinar on July 14th, where experts from the University of New Hampshire InterOperability Laboratory (UNH-IOL) team up with Avery Design Systems to talk about conformance testing.

Webinar Agenda

Daniel Nenni from SemiWiki will provide an industry overview, and then David Woolf from UNH-IOL and Luis Rodriguez from Avery Design Systems will provide insight about:

  • NVMe 1.4 and 2.0 standards, what’s changed
  • OCP NVMe features
  • Faster testing and validation with UNH-IOL and Avery together
  • IOL INTERACT overview and plugfest, demo on QEMU-NVMe
  • Using pre-silicon RTL simulation and running engineering regressions
  • The QEMU virtual host and SoC system co-simulation from Avery

Speaking with David Woolf, I learned that NVMe can be used in mobile devices, laptops, desktops and even servers in the data center, so quite a wide span of use cases, all of which need to be tested for compliance. Instead of waiting to run compliance tests at the end of an NVM project, there’s a way to run compliance testing while your project is still in the design phase, using emulation, when it’s much easier to make changes. This is a great example of shift-left thinking: move testing much earlier in the product lifecycle.

NVMe Environment

Here’s a diagram of using QEMU to emulate NVMe conformance testing:

By co-simulating the SoC RTL code with the QEMU open-source virtual machine, a software engineer can develop and build firmware, drivers and even applications to run on a Linux or Windows platform. Software issues can be debugged with GDB or KGDB while using the cycle-accurate SystemVerilog RTL for the SoC.

Summary

Standards make the world go round, and the new NVMe 2.0 standard is ready for systems companies working on SSD-based electronics. Doing your conformance checking early in the design process will save you time and the surprise of waiting until production, when it’s too late to make design changes. Avery Design Systems has verification IP well suited to shift-left conformance testing, along with IOL INTERACT from UNH-IOL.

Enjoy the webinar on July 14, 2021, starting at 1PM Eastern, 10AM Pacific time. Register online here.

Also Read:

Avery Levels Up, Starting with CXL

Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions



Siemens Offers Insights into Gate Level CDC Analysis
by Tom Simon on 06-28-2021 at 10:00 am


Glitches on clock domain crossing (CDC) signals have always been a concern for chip designers. Now, with increased requirements for reliability, renewed scrutiny is being applied to finding and fixing these problems. Applications such as automotive electronics, in particular, have given this effort added impetus. Siemens EDA has learned a lot about CDC analysis through working with customers on numerous challenging projects. They learned that there are more types of CDC paths that should be examined than those traditionally viewed as sources of problems.

In their white paper titled “The Three Witches – Preventing Glitch Nightmares on CDC Paths” the authors Ping Yeung and Sulabh-Kumar Khare share their experience with gate level CDC analysis. The three types of CDC paths they cite as critical to focus on are unsynchronized CDC paths, combinational CDC paths and, of course, data multiplexing CDC paths. Their paper focuses on gate level analysis because synthesis can take a seemingly clean design and introduce CDC issues during optimization steps.

Different types of CDC paths need to be properly identified so that the appropriate analysis methods are used to look for issues. The authors also point out that gate level analysis can require long runtimes. This calls for methods to improve efficiency, such as the ability to make refinements and continue execution instead of starting over each time. The Siemens paper also looks at methods to add parallelism to CDC analysis runs to improve throughput.

Questa CDC Analysis

Siemens describes a methodology that combines structural CDC analysis, expression analysis and formal methods to identify and prove real glitches at the gate level. The complete methodology has three stages. It starts with gate level setup. The RTL constraints are needed to obtain accurate results. Siemens’ Questa Signoff CDC has the ability to track naming transformations so that RTL constraints can be used for gate level analysis. Starting with the validated RTL constraints means there is less room for error. In cases where more information is needed, additional RTL-to-gate name mapping can be added. Also, Questa Signoff CDC can infer constraints for added logic, such as scan.

The second stage identifies the safe, unsafe and waived CDC paths. Naturally the safe paths are those that contain no combinational logic. The unsafe paths will contain a mixture of the “Three Witches” that the paper’s title refers to. Waivers from the RTL CDC analysis can be used to reduce the workload at the gate level.

Finally, the third stage involves gate level glitch analysis. This is a comprehensive expression analysis of the combinational logic tree in the CDC path that identifies potential glitch candidates. The list of glitch candidates can be pruned with this information. A formal engine is then used to look for scenarios where glitches can propagate. Questa Signoff CDC performs analysis that identifies the exact location at which a signal and its complementary term will converge. The result of this analysis gives the designer the essential information to understand the exact scenario in which a glitch can cause a failure.
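The structural part of this analysis, spotting where a signal and its complement reconverge, can be illustrated on a toy netlist. This sketch only finds the classic static-hazard pattern; it is nothing like a production expression-analysis or formal engine, and the netlist format is invented for the example.

```python
# Toy netlist: each driven net maps to (gate_type, input_nets).
# Primary inputs have no driver. Format invented for this example.
NETLIST = {
    "nsel": ("NOT", ["sel"]),
    "p1":   ("AND", ["a", "sel"]),
    "p2":   ("AND", ["b", "nsel"]),
    "out":  ("OR",  ["p1", "p2"]),   # classic MUX: hazard when a == b == 1
}

def support(net, polarity=True):
    """Set of (primary_input, polarity) pairs that net depends on."""
    if net not in NETLIST:
        return {(net, polarity)}
    gate, inputs = NETLIST[net]
    flip = gate == "NOT"            # inverters flip the dependency polarity
    result = set()
    for i in inputs:
        result |= support(i, polarity != flip)
    return result

def glitch_candidates():
    """Gates where some signal reconverges in both polarities."""
    hits = []
    for net, (gate, inputs) in NETLIST.items():
        deps = set().union(*(support(i) for i in inputs))
        signals = {s for s, _ in deps}
        both = {s for s in signals if (s, True) in deps and (s, False) in deps}
        if both:
            hits.append((net, sorted(both)))
    return hits

print(glitch_candidates())   # → [('out', ['sel'])]
```

Here the OR gate driving `out` sees `sel` through a non-inverted path and an inverted path, which is exactly the convergence of a signal and its complementary term that the analysis flags; a formal engine would then check whether a glitch at that point can actually propagate.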

Because the potential glitch paths are largely independent of each other, the analysis can be partitioned for parallel processing. Using a large server farm, thousands (or more) of CDC paths can be processed concurrently. With this approach, even extremely large designs can be analyzed in an overnight run.
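Because each path's check is independent, the fan-out is embarrassingly parallel. A minimal single-machine sketch of that idea follows, with a thread pool standing in for the server farm and a placeholder function standing in for the real per-path glitch analysis:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_path(path_id: int) -> tuple[int, bool]:
    # Placeholder for the real per-path glitch analysis; here we just
    # flag every seventh path so there is something to report.
    return path_id, path_id % 7 == 0

def analyze_all(num_paths: int, workers: int = 8):
    """Farm independent CDC-path checks out to a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(analyze_path, range(num_paths)))
    return [pid for pid, glitchy in results if glitchy]

print(analyze_all(100))   # path IDs flagged for closer analysis
```

In a real flow, each worker would be a separate job on the server farm running the formal check for one path, and the results would be merged into a single report afterward.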

Because the Three Witches of CDCs can curse a design, it’s comforting to know that there are approaches to smoke them out. Siemens benefits from their ability to work with customers on large projects to learn best practices which can then be supported in their tools and shared with designers. The full white paper is available on the Siemens EDA website.

Also Read:

RealTime Digital DRC Can Save Time Close to Tapeout

From Silicon To Systems

Heterogeneous Chiplets Design and Integration



Keynote from Air Force Research Laboratory at CadenceLIVE Americas 2021
by Kalar Rajendiran on 06-28-2021 at 6:00 am


Cadence hosted its annual CadenceLIVE Americas conference June 8th-June 9th. Four keynotes and eighty-three different talks on various topics were presented. The talks were delivered by Cadence, its customers and partners.

The C-suite keynotes were delivered by Lip-Bu Tan (CEO) and Dr. Anirudh Devgan (President). The talks provided insights into Cadence’s strategy, direction, technology and latest transformational products. The two guest keynotes were from Google and Air Force Research Laboratory (AFRL). Earlier blogs covered the Cadence keynotes and Google keynote.

An AFRL keynote at CadenceLIVE? The relevance may not be immediately apparent unless one knows the history of the Air Force, computing and the semiconductor industry. With his talk, titled “AFRL Microelectronics Perspectives”, Dr. Yadunath Zambre, chief microelectronics technology officer at the AFRL, connected the dots. This keynote, delivered back-to-back with Devgan’s keynote, highlighted the mutual dependence between the semiconductor industry and the military.

The Air Force jumpstarted the revolution in computing roughly seven decades ago. And along with other arms of the military, it contributed to significant growth of the semiconductor market. Through the 1990s, procurement levels were significant enough for semiconductor companies to run dedicated fabs to support military requirements. It is not that the military’s semiconductor needs suddenly evaporated; demand has grown over time, but the markets for commercial applications started outweighing the demand for military applications. And the disaggregation and offshoring of the semiconductor supply chain have added more challenges for the military from design, manufacturing and procurement perspectives.

In his keynote address, Dr. Zambre walked the audience through the AFRL’s needs, challenges and procurement objectives, and the solutions approach it uses to meet its electronics product needs.

AFRL’s Needs

With the Department of Defense (DoD) operating across the maritime, land, air, cyberspace and space domains, it runs multi-domain operations and deploys a range of very different capabilities. This calls for different materials, processes, process geometries, substrates and packaging. It involves not just digital electronics but also analog, RF, high voltage/current switching, biological sensors and heterogeneous integration and packaging. In essence, it requires multi-physics modeling and simulation covering fluid dynamics as well as thermal, structural and mechanical properties.

AFRL’s Challenges

Commercial market demand on the semiconductor supply chain dwarfs demand from the DoD. With few, if any, incentives for suppliers to support DoD production needs, availability of and access to manufacturing capacity are limited. Adapting or extending commercially available products to meet the DoD’s unique requirements introduces technical and cost hurdles. And with the supply chain based in regions subject to adversary influence, security and trust are big concerns.

Procurement Objectives

With the average life of different products in the 10- to 30-year range and a viable supply chain that is outsourced and offshored, the following are some key goals:

  • Capping the lifecycle cost
  • Improving cost and schedule predictability
  • Ensuring access to a reliable and secure ecosystem
  • Gaining better visibility into suppliers

Solutions Approach

Many of the platforms and technologies that Cadence’s Devgan highlighted in his keynote address the AFRL’s needs and challenges, and the aerospace and defense industry is just one of the many markets these platforms support. That commonality makes it easier for the supply chain to cost-effectively support the DoD’s needs without having to develop very different solutions.

Digital twins, virtual prototyping and shift-left methodologies help reduce average cost per year. They make it possible to demonstrate proof of concept and to develop requirements and cost estimates without fully building all of the hardware. The Air Force already has many years of experience leveraging these methodologies in aircraft and rocket design. EDA tools have now reached a level that enables completing designs in a single spin. Emulation with digital twins makes it easier to develop plug-and-play replacements for form, fit and function, helping extend the life of products. This approach caps lifetime cost and reduces average cost per year compared with the historical approach of block upgrades every five years.

Summary

Dr. Zambre’s talk not only gave the audience a nice historical review of the Air Force and the aerospace and defense journey as it relates to electronics and semiconductors; it also showed how technology needs and solutions have converged between the commercial and aerospace and defense sectors. The products the AFRL produces are different from commercial and industrial market products, but the semiconductor technology platforms and the tools to design, test and produce them are, in the end, not that different.

 


COVID Recovery Revalues Vision Data

by Roger C. Lanctot on 06-27-2021 at 10:00 am


As the U.S. and global economies emerge from COVID-19 lockdowns, the enduring impact on transportation is still unfolding. Ride-hail drivers are returning. Car sharing is surging. Autonomous vehicle testing is reviving. Commuters are commuting. And pedestrians are multiplying.

As people and vehicles return to highways and byways, cities and towns are coming to grips with rising vehicle-related fatality rates just as drivers are coming to grips with a transformed streetscape. Precisely at the moment that people and goods delivery have become increasingly important, available pickup and drop-off points are increasingly blocked by roadworks or restaurant-related outdoor seating.

All of this is happening on the eve of the U.S. approving historic infrastructure funding likely to touch off unprecedented traffic disruptions.

Now, more than ever, drivers and municipal authorities are looking for more reliable and complete data on available traffic through-points, both in real time and retrospectively. This is where Nexar comes in.

This week, Nexar announced a relationship with Blyncsy to supply crowdsourced imagery of pavement markings to the company’s Payver service. Payver provides up-to-date information on highways and roads by taking real-time images and detections collected from the hundreds of thousands of Nexar dash cams currently on U.S. roads and applying Blyncsy’s proprietary machine learning models to understand changing road conditions and the visibility of pavement markings.

Nexar’s dash cams are also gathering data on the exact location of every stop sign, traffic light, lane line, curb, and parking space. And that is precisely how Nexar’s data is being used, after appropriate anonymization, by transit authorities, municipalities, and autonomous vehicle developers.

The various scenarios were detailed in a recent CoMotion Webinar and included identifying and locating:

  • Work zones
  • Abandoned work zone traffic diversion materials
  • Blocked streets
  • Streets with reversed lane traffic
  • Unprotected traffic guiding personnel
  • Lanes blocked for restaurant outdoor dining
  • Free parking spaces
  • Improper traffic cone use or placement

An overview of Nexar’s available dash cam imagery leaves little doubt that both human and “robot” drivers face an array of formidable challenges in the current post-COVID traffic environment. It is perhaps no surprise that traffic fatalities are up in the U.S. and elsewhere, or that industry experts and autonomous vehicle operators are forecasting a much longer struggle to achieve fully autonomous operation than previously thought.

Nexar supports multiple autonomous vehicle training and development efforts with its imagery resources representing upwards of 100M miles/month of data. In our brave, new post-COVID world, the variety and volume of vehicles, pedestrians, and roadway obstacles and circumstances is more confusing than ever. It’s a good time to be gathering and sharing this data.


A Free RISC-V CPU Core Builder – Democratizing CPUs

by Steve Hoover on 06-27-2021 at 6:00 am

warp-v.org

There are now over a hundred RISC-V CPU cores listed on riscv.org‘s RISC-V Exchange! Amazing. If you need a RISC-V CPU core, you’ll likely be able to find one that suits your needs… if you evaluate a hundred CPU cores to find it.

Or, now, you can configure exactly the core you need and have it built in seconds, for free! WARP-V is the most flexible RISC-V CPU core available, and recently Indiana University student Adam Ratzman created an online configurator for WARP-V. If you need a low-to-mid-range CPU core, check out Adam’s work at warp-v.org.

I spent most of my career designing high-end CPUs. I worked on CPUs that were more complex than we are likely to ever see again. CPUs went through this crazy cycle of escalating complexity in the race for single-core performance, followed later by the need to simplify.

Technology trends played a funny trick on us. In the nineties, as Moore’s Law gave us more silicon to play with, we gobbled it up to implement the next whiz-bang speculation trick for a 1% edge in single-stream performance. But now tricks like that work against us in so many ways. They mean big cores, which mean longer wires with RC delays, which, in a modern process, dominate the cycle time and decrease performance. They mean more power, which we must trade off against performance. They eat up space, which means fewer cores, which decreases performance. They increase design effort, which means it takes longer to optimize for the next process node, which hurts performance.

All this is to say that CPU design no longer requires 500-person design teams. I developed the initial WARP-V core in 2018 in a week and a half. It contains none of the advanced CPU microarchitectural techniques I’ve learned and developed throughout my career. And this might be exactly what you want. On the other hand, what is unique about WARP-V is that it is flexible. You’ll be able to optimize it for your own needs relatively quickly, and today, this is how we get performance.

The secret to WARP-V’s flexibility, and the focus of my startup, Redwood EDA, is Transaction-Level Verilog (TL-Verilog). TL-Verilog gives WARP-V its ability to provide, from the same source code, a single-cycle CPU, a seven-cycle CPU, or anywhere in between. It provides the ability to connect any configuration of WARP-V to a third-party RISC-V formal checker using the same single page of code. It helps to decouple the ISA from the microarchitecture, so WARP-V can support MIPS and other ISAs in addition to RISC-V. You just can’t get this flexibility from RTL, and this flexibility is the key to successful silicon.
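To give a flavor of how this works, here is a schematic sketch in TL-Verilog syntax (not actual WARP-V source; the signal names are hypothetical). Logic is written inside a named pipeline and assigned to stages with `@` labels, so retiming is a matter of editing a stage number:

```
\TLV
   |cpu
      @0
         // Program counter; >>1 references the signal's value from one cycle earlier.
         $pc[31:0] = $reset ? 32'd0 : >>1$pc + 32'd4;
      @1
         // Fetch. Moving this logic to @0 yields a shallower pipeline.
         $instr[31:0] = $imem_rd_data;
      @3
         // Execute. The staging flip-flops between @1 and @3 are inferred,
         // so logic can migrate between stages without RTL rework.
         $result[31:0] = $src1_value + $src2_value;
```

Because the staging flip-flops are inferred from the `@` labels rather than written by hand, the same source can elaborate as anything from a single-cycle design to a deeply pipelined one, which is what lets WARP-V’s configurator span that range.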

WARP-V may not be everyone’s best option today. It has, thus far, been a small-scale effort, and is currently just the CPU core, not bundled with peripherals. But it should serve a good portion of the community quite well as it stands, and it shows a way forward to democratize CPUs without the need for a hundred different independent cores.

RISC-V has liberated the ISA. Now it’s time to liberate CPUs and other components. Save the patents for bigger things.


Podcast EP26: The Challenges and Opportunities of IP Reuse

by Daniel Nenni on 06-25-2021 at 10:00 am

Dan and Mike are joined by Simon Rance, head of marketing for Cliosoft. Simon discusses a broad range of topics associated with IP reuse from both the IP provider and IP consumer points of view. Design data management, IP technical capabilities, license tracking, the benefits of a knowledge base, and more are reviewed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.