
The History and Significance of Power Optimization, According to Jim Hogan

by Mike Gianfagna on 11-13-2020 at 10:00 am

Jim Hogan

Power seems to be on everyone’s mind these days. Hyperscale data centers worry about operating costs unless power is optimized. AI accelerators at the edge can’t be effective without optimized power. Advanced 2.5D and 3D packages simply can’t remove the heat unless power is optimized. And then there are all the gadgets we carry and wear; power optimization lets us go a full day without plugging them in. There are plenty of other scenarios, but you get the idea. This is a big topic, with implications from chip design to system design. To get a comprehensive view of the topic, I tapped industry luminary Jim Hogan. Having worked in semiconductor, EDA, and system companies, Jim brings all perspectives to the table. I wanted to see what Jim thought about the history and significance of power optimization.

We started at the beginning. When did Jim first realize power was important to optimize? Jim explained that heat was the enemy in many early designs, and that led to a focus on power optimization. All this started in the early 1990s, according to Jim. His experience is largely at Cadence, but he also gave a nod to Apache Design Solutions (now part of ANSYS), which came along a bit later and did a lot to focus and define the transistor-level power optimization market. Speaking of Apache, Jim pointed out their favorable exit with ANSYS, but cautioned that this is not an easy market; it took Apache over a decade to achieve that result. Good things seldom happen overnight.

Jim credits the Apache acquisition by ANSYS for creating an inflection in the market. Apache did a lot to define the market and the ANSYS acquisition created interest from the larger EDA players. It also brought ANSYS closer to the EDA market. With this backdrop, Jim began discussing power in a broader sense with Dr. Vojin Zivojnovic, AGGIOS founder and CEO. During this time, there was a lot of focus on transistor-level power optimization with other physical design and process scaling efforts.

Vojin had been brainstorming a different approach with some key technical collaborators, one that focused on software and its impact on power. Jim recalls a discussion with Vojin during DAC 2010 in Anaheim, CA. The simple observation made during that discussion was that software only consumes power when it’s running. What could be done to exploit that to reduce power and heat? Vojin and Jim agreed there was a significant business and technical opportunity in all this. And so AGGIOS was born.
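The "software only consumes power when it's running" observation can be made concrete with a little energy arithmetic. The sketch below compares a "race-to-idle" strategy against running slowly for a whole time budget; all power numbers are invented for illustration, not from AGGIOS or any real device.

```python
# Illustrative "race-to-idle" energy comparison (all numbers hypothetical).
# The insight: software only draws active power while it is running, so
# finishing work quickly and dropping into a low-power state can save energy.

def energy_mj(active_power_mw, active_ms, idle_power_mw, idle_ms):
    """Total energy in millijoules: E = P_active*t_active + P_idle*t_idle."""
    return (active_power_mw * active_ms + idle_power_mw * idle_ms) / 1000.0

WINDOW_MS = 100.0  # time budget for the task

# Strategy A: run slowly at lower power for the whole window, never idle.
slow = energy_mj(active_power_mw=300.0, active_ms=WINDOW_MS,
                 idle_power_mw=0.0, idle_ms=0.0)               # 30.0 mJ

# Strategy B: race at higher power, finish in 25 ms, idle the other 75 ms.
race = energy_mj(active_power_mw=500.0, active_ms=25.0,
                 idle_power_mw=5.0, idle_ms=WINDOW_MS - 25.0)  # 12.875 mJ

print(f"slow-and-steady: {slow:.3f} mJ, race-to-idle: {race:.3f} mJ")
```

The point is that only software knows when the work is done, which is why a software-driven view of energy management opens savings that pure hardware optimization cannot see.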

I explored software vs. hardware power optimization with Jim. Were they two different ways to solve the same problem, and if so, which was better? Jim felt the two approaches were different and came at the problem with a different set of goals that would yield a different set of results. From a historical perspective, there was an important change in system design philosophy that began around 2005. Up to that point, processor architectures were defined to serve a market and the specs for those processors were then given to the software team, who built the most efficient application they could based on the hardware constraints they were given.

Jim explained a new model began emerging around 2005, starting with cell phones. Now the software and user experience were the driving specification, and the processor was built to support that specification. Essentially, the handoff arrows reversed in the product development lifecycle. If software now defined the hardware, there was an opportunity for software to also define power consumption for that hardware. This new design paradigm further fueled the focus and innovation occurring at AGGIOS.

Now, there was an opportunity to quantify power savings based on software optimization. Savings in the double-digit percentages became possible, and this provided the opportunity for more innovation and optimization. You get what you measure, so to speak. Against this backdrop, AGGIOS and its software-defined energy management approach began to grow. I poked at competition a bit with Jim: AGGIOS appears to be singular in its focus on software-defined energy management; why is that?

Jim pointed out that markets don’t occur overnight, they take time to grow. He recalled the decade or more that was required for Apache to create a transistor-level power optimization market. Today, there are many drivers for power optimization and energy management. The aerospace and defense sector has a significant focus on the problem. So does industrial test and measurement. Hand-held devices are everywhere, and battery life is a key usability item. And of course, there is constant focus on power reduction in the data center and the edge. AGGIOS is simply ahead of the curve.

There were more stories and more insights in my conversation with Jim, perhaps the topic of a future post. After our conversation, I had a better perspective on the history and significance of power optimization.

Also Read

The Gold Standard for Electromagnetic Analysis

Executive Interview: Vic Kulkarni of ANSYS

World’s Leading Chip Designers at IDEAS Digital Forum Show How to Streamline Design Flows and Reduce Design Cost


CEO Interview: Dr. Chouki Aktouf of Defacto

by Daniel Nenni on 11-13-2020 at 6:00 am

Defacto CEO Interview Chouki Aktouf

“For more than 18 years, we have never stopped innovating at Defacto. We are aware of the EDA mantra ‘Innovate or Die!’ Innovation is in our DNA, and we keep adding new automated capabilities for the SoC design community to help face the complexity and cost challenges that increase every year.”

Before founding Defacto in 2003, Dr. Chouki Aktouf was an associate professor of Computer Science at the University of Grenoble, France, and the dependability research group leader. He holds a Ph.D. in Electrical Engineering from Grenoble University.

We heard a long time ago about the Defacto RTL DFT solution and, since our last interview, Defacto’s SoC integration solutions at RTL seem to be creating a lot of buzz within the designer community. Would you please tell us more?

Yes, we started as a DFT tool provider more than 18 years ago. Our current main offering is in the area of SoC integration at RTL, where we help our customers build a unified design flow to start the design assembly and SoC integration process pre-synthesis, covering not only the RTL but also other design collateral such as UPF, SDC, and IPXACT.

Beyond the automation we provide, we enable a high degree of customization, including rich APIs beyond Tcl, like Python and Java.

In summary, Defacto simplifies the implementation process pre-synthesis by taking care of the RTL, including SystemVerilog, and all the design data correlated to the RTL. A typical example is the Accellera standard IPXACT, used to describe, integrate, and share IPs and sub-systems. Defacto now has a complete offering in which IPXACT and RTL are tied to each other in one unified SoC implementation platform.
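As a rough illustration of the kind of RTL/IPXACT coherency problem being described, here is a toy sketch (not Defacto's tool; the module, port names, and the simplified port-matching logic are all invented for illustration) of one basic check: that the port list in an IP's metadata stays in sync with its RTL.

```python
# Toy coherency check between an RTL module and the port list that an
# IPXACT-style description might carry for the same IP (names invented).
import re

RTL = """
module uart_top (
  input  wire        clk,
  input  wire        rst_n,
  input  wire [7:0]  tx_data,
  output wire        tx_ready
);
endmodule
"""

# Port names as they might appear in the IP's IPXACT description.
IPXACT_PORTS = {"clk", "rst_n", "tx_data", "tx_ready", "tx_valid"}

# Very simplified Verilog port extraction for this toy example only.
rtl_ports = set(re.findall(
    r"\b(?:input|output|inout)\s+wire\s+(?:\[[^\]]+\]\s+)?(\w+)", RTL))

missing_in_rtl = IPXACT_PORTS - rtl_ports
missing_in_ipxact = rtl_ports - IPXACT_PORTS
print("in IPXACT but not RTL:", sorted(missing_in_rtl))     # ['tx_valid']
print("in RTL but not IPXACT:", sorted(missing_in_ipxact))  # []
```

A real flow must of course handle the full Verilog/VHDL grammar, bus widths, and interface bundles; the point is simply that metadata and RTL drift apart without automated checks.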

 

You talk about IPXACT for integration and, we heard about the Magillem acquisition. Is it a good opportunity for you to help customers in this area?

Customers are telling us that they don’t want IPXACT to be decoupled from the RTL, and they are pushing us to cover IPXACT-related needs not only in terms of coherency checks and view generation but, more importantly, during the RTL SoC integration process. The good news is that we are ready to answer the IPXACT design assembly and SoC integration needs; we already count daily users on real projects. In summary, we are helping both new IPXACT adopters, who need a smooth, reliable migration in full compliance with RTL design flows, and companies who already consider IPXACT a golden format. In other words, we believe we are covering current market needs, and we are even anticipating future ones. Indeed, being an active Accellera member, we capture customer needs beyond the current IPXACT 2009 and 2014 standards.

 

You talk about UPF and SDC. How do you manage them during the integration process? Do you update them as well?

SDC and UPF are essential design collateral, key during the SoC implementation process; managing RTL alone is not enough. Updating UPF and SDC files when the RTL changes is a painful design task that is traditionally manual. Defacto provides automation to cover checking between RTL and SDC/UPF, but also to update UPF and SDC along with RTL changes, plus many other related automated capabilities.
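To illustrate why keeping SDC in step with RTL is painful by hand (a toy sketch, not Defacto's implementation; the constraint text and port names are invented), even a simple RTL port rename forces a matching rewrite of every SDC constraint that references the port, or those constraints silently stop applying.

```python
# Toy illustration: propagating an RTL port rename into SDC constraints.
import re

sdc = """create_clock -name core_clk -period 2.0 [get_ports clk_main]
set_input_delay 0.5 -clock core_clk [get_ports din]
"""

def rename_port(sdc_text, old, new):
    """Rewrite [get_ports <old>] references after an RTL port rename."""
    return re.sub(r"(\[get_ports\s+)" + re.escape(old) + r"(\])",
                  r"\g<1>" + new + r"\g<2>", sdc_text)

updated = rename_port(sdc, "clk_main", "clk_sys")
print(updated)  # the create_clock line now reads [get_ports clk_sys]
```

Real constraint files reference ports through wildcards, hierarchical paths, and generated clocks, which is exactly why this class of update benefits from tool automation rather than scripts like this one.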

 

Your platform seems to be a complete solution for SoC integration, covering all front-end needs. Who is the typical customer of Defacto?

Most SoC companies and major IP providers building chips for their customers need to bring more automation into their design flows. Today, most SoC companies buy IPs from third parties, build their own IPs, and reuse parts of previous projects, and nothing is packaged the same way. It becomes a mess when it’s time to do the SoC assembly. Using the flexibility of our platform, designers can start building their SoC faster and earlier.

We are proud to count as customers major semiconductor companies leading the chip market of communication, processors, AI, IoT, etc.

What makes customers go with your solution, what do they find in your company that doesn’t exist in the other EDA vendors like the majors?

Our solution has reached a maturity level that makes our customers more confident in managing aggressive PPA requirements when using Defacto. We have already contributed to silicon success for many different projects, including leading-edge technologies. We are not competing with the major EDA companies; we strongly believe we are complementary to their tools in the area of SoC integration and design assembly. Also, since Defacto’s STAR is more a platform than a point tool, a user finds several ways to customize the features used, including the API languages. In summary, our customers tell us they like the flexibility and ease of use of our tools. Even migrating their internal scripts is straightforward when switching to Defacto’s STAR.

 

What are the plans for Defacto in the coming years?

Defacto is growing and will continue to grow. We are present in almost all territories, with local technical expertise. In the coming months we are preparing announcements of new capabilities for our STAR platform that will keep STAR well ahead as the “De facto” pre-synthesis SoC integration and assembly solution. Finally, when using Defacto our customers know they will get a best-in-class combination of support and product quality. To protect this “support & quality” brand, we will keep improving our support daily by making our responsiveness even better, and the quality of our tools by strengthening our internal QA process.

Also Read:

CEO Interview: Andreas Kuehlmann of Tortuga Logic

CEO Interview: Paul Wells of sureCore

CEO Interview: Wally Rhines of Cornami


Mentor Offers Next Generation DFT with Streaming Scan Network

by Tom Simon on 11-12-2020 at 10:00 am


Design for test (DFT) requires a lot of up-front planning that can be difficult to alter if testing needs or performance differ from initial expectations. Hierarchical methodologies help in many ways, including making it easier to reduce on-chip resources such as the number of test signals. Hierarchical test also allows for speed-ups from parallel testing of separate blocks. Muxing of test signals helps deal with limited test pins on SoCs and also reduces wiring overhead. Yet with each of these advantages comes the need to plan accordingly. Mentor has just announced what they call Streaming Scan Network (SSN) in their Tessent product line, which promises increased flexibility and performance. Tessent’s SSN supports implementation of a specialized network for carrying test data within an SoC.

Streaming Scan Network

At first glance it might seem that a Network on Chip (NoC), similar to those used for block-level data buses, would suffice; however, Mentor has developed a highly optimized solution that better meets the needs of DFT in several clever ways. The Streaming Scan Network bus width is independent of the number of block-level scan pins. For instance, a 4-bit-wide SSN bus can carry 7 bits for one block and 5 for another, interleaving the bits on the SSN bus. In fact, the SSN bus could even be 1 bit wide if needed and still be used for cores with any number of test pins. There is a controller at each core node that manages the incoming and outgoing test data and transfers it to and from the block’s scan pins. The controllers have their own command line that directs their activity. Because no header information is required once streaming starts, the SSN bus carries 100% payload. This is why SSN is a better solution for test than traditional NoCs.
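The width-independence idea can be sketched in a few lines. This is a conceptual toy, not Mentor's actual protocol: it just shows how scan slices of arbitrary width serialize onto a fixed-width bus, so a 4-bit bus can carry a 7-bit slice for one core and a 5-bit slice for another with every bus word being pure payload.

```python
# Conceptual sketch of per-core scan slices interleaved onto a fixed-width
# streaming bus (simplified; the real SSN node controllers are in hardware).
BUS_WIDTH = 4

def pack(core_slices):
    """Serialize one scan slice per core into BUS_WIDTH-bit bus words."""
    stream = [bit for bits in core_slices for bit in bits]
    stream += [0] * (-len(stream) % BUS_WIDTH)  # pad so the last word is full
    return [stream[i:i + BUS_WIDTH] for i in range(0, len(stream), BUS_WIDTH)]

core_a = [1, 0, 1, 1, 0, 0, 1]   # 7 scan bits for core A
core_b = [0, 1, 1, 0, 1]         # 5 scan bits for core B
words = pack([core_a, core_b])
print(len(words), "bus words:", words)  # 12 payload bits fit in 3 words
```

Each node controller knows which bit positions belong to its core, which is how the bus width stays decoupled from any core's scan-pin count.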

The grouping of cores for parallel testing can be easily reconfigured because the targeting of test data is handled by the node controllers. So, if in the late stages of product development, a specific block’s test vector size increases, no hardware changes are needed. Also scan and capture can now be handled flexibly and do not have to align with other blocks being tested at the same time. If there are blocks that require larger vectors, they can get priority, so their tests can complete sooner. This makes it possible to optimize and save tester time.

So, what are the results of using the Streaming Scan Network? Mentor worked extensively with Intel prior to the announcement. In fact, Intel published a paper at the 2020 ITC that goes into the details of the improvements they observed. Intel reports that SSN reduced test data volumes by 36 to 43% and reduced test cycles by 16 to 43%. They were also able to run the steps in the design and retargeting flow 10-20X faster compared to their previous methodology.

Mentor claims that DFT development time can be cut in half or more with the application of Streaming Scan Networks. And because SSN supports abutment-based design styles, it is useful in many of the newer tile-based designs found in AI and other parallel processing applications.

Mentor has shown a consistent history of aggressive technology development in test, and the recent advances in their Tessent product are impressive. Although in hindsight it seems inevitable that a packet-based methodology would be advantageous for DFT, Mentor has taken the time to develop a well-thought-out approach that is well suited for test in particular. More information about Mentor’s Streaming Scan Network in Tessent can be found on the Mentor website.

Also Read:

Mentor User2User Virtual Event 2020!

ASIC and FPGA Design and Verification Trends 2020

Siemens is the True Catalyst for Secure and Trusted Digital Transformation


Agile and DevOps for Hardware. Keynotes at DVCon Europe

by Bernard Murphy on 11-12-2020 at 6:00 am

Agile and DevOps for Hardware

Paul Cunningham (Verification CVP/GM at Cadence) initiated our monthly Innovation in Verification blog to hunt for novel ideas in verification, breaking past the usual steady, necessary but undramatic pace of incremental advances. I attended a couple of sessions from DVCon Europe recently and was encouraged to hear a couple of talks with a similar mindset. The opening keynote was delivered by Moshe Zalcberg, CEO of Veriest, on what ideas hardware design and verification might borrow from software. Moshe covered a lot of territory: open-source design and tooling, use of Python, Agile, more effective use of data and AI. I’d like to look here at the topic of Agile/DevOps for hardware, particularly since Vicki Mitchell, an engineering VP at Arm, followed with a later keynote on how she is applying these today at Arm.

Waterfall versus Agile

I’d better start with a little explanation, following Moshe’s talk. Consider traditional waterfall development, the approach most of us use in design and verification today: from requirements gathering to design, implementation and verification, and ultimately to delivery, in that sequence. Here, the product is not really usable until near the end. Agile methods aim to improve on waterfall approaches through continuous delivery of value, delivering working code frequently and maintaining a constant pace of delivery. Developers build code in short cycles called sprints, with working results available at multiple points through the complete cycle. Shift-left is a compressed waterfall. In contrast, Agile breaks up the goal by code features and aims to complete a group of features as well as possible in each sprint drop. For a testbench, for example, each sprint should deliver a working testbench (or family of testbenches) for some set of features.

How Agile helps

These practices have already become common in software development. Software team leaders assert that an Agile flow provides higher quality results because developers have to fully test what they build in the current sprint. It also provides better schedule predictability, and difficult problems are surfaced more quickly for resolution. Obviously you have to embed testing tightly in development in this approach: unit testing, coding standards, static analysis and so on. There are tons of unit testing frameworks in the software world to help automate this task.
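As a trivial example of that discipline, here Python's built-in unittest stands in for whatever framework a team actually uses: the "feature" shipped in a sprint (a parity function, invented for illustration) comes bundled with tests that run automatically in the flow.

```python
# Minimal unit-testing sketch: the sprint's feature ships with its tests.
import unittest

def parity(bits):
    """Even-parity bit for a list of 0/1 values (the feature under test)."""
    return sum(bits) % 2

class TestParity(unittest.TestCase):
    def test_even_number_of_ones(self):
        self.assertEqual(parity([1, 1, 0]), 0)

    def test_odd_number_of_ones(self):
        self.assertEqual(parity([1, 0, 0]), 1)

if __name__ == "__main__":
    # exit=False keeps this runnable inside larger scripts or CI harnesses.
    unittest.main(argv=["parity_tests"], exit=False, verbosity=0)
```

The hardware analogue is embedding block-level checks in the regression flow so each sprint's RTL drop is fully exercised before integration.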

Application in Arm

In hardware design, and especially verification, Moshe admits we are in the early days of adoption of such practices. However, it is starting to happen, notably in the systems group at Arm. Vicki Mitchell, VP of central engineering for that group, gave a second keynote on how they’re using DevOps in that role, including in supporting customers through reference design system verification, for example. She brings to this role a lot of background running software engineering organizations at other companies such as Intel.

Vicki mentioned, incidentally, a key motivation for Agile approaches – lack of clear requirements and user feedback. Customers are figuring out on-the-fly what they need, and competitors aren’t standing still. That creates much more churn in development and a greater need for agility. Which leads to a need in Arm’s eyes to make agility actionable. Vicki talked particularly about DevOps rather than Agile. These two processes look very similar to a simpleton like me. My takeaway is that DevOps has an in-house focus (development + operations, aka build, regression, delivery etc). It aims for very quick feedback and it has a big focus on automation.

Arm DevOps automation

What I found particularly interesting in Vicki’s talk was Arm’s implementation and learning for continuous integration. Their gatekeeper flow runs integration tests as you check in a change. They use change-set checks to determine which tests should be run and will up-vote or down-vote your submission on each test. Test sets bloat fairly quickly in this kind of automation, so they apply machine learning to periodically cull them down to an optimized set. They’ve also developed tools to automate building integration tests. She summed up by noting that they’ve been able to improve scheduling and provide more frequent deliveries to stakeholders. The CPU team at Arm (Austin, I think) is now piloting a similar program.
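The change-set idea can be sketched as follows. The file names and the coverage mapping below are invented for illustration; Arm's actual flow is surely far richer, but the core mechanic is selecting only the tests whose coverage intersects the files touched by a check-in, keeping gatekeeper feedback fast.

```python
# Hedged sketch of change-set-based test selection (names are invented).
TEST_COVERAGE = {                      # test -> source files it exercises
    "test_alu":    {"rtl/alu.v", "rtl/decode.v"},
    "test_cache":  {"rtl/cache.v"},
    "test_decode": {"rtl/decode.v"},
}

def select_tests(changed_files):
    """Return the tests whose coverage intersects the change set."""
    changed = set(changed_files)
    return sorted(t for t, files in TEST_COVERAGE.items() if files & changed)

print(select_tests(["rtl/decode.v"]))  # ['test_alu', 'test_decode']
```

The ML-based culling Vicki described addresses the next problem: as mappings like this grow, periodically pruning redundant tests keeps the selected set from bloating.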

Interesting insights. You can learn a lot more from the talks themselves, which are still available as recordings from DVCon Europe through November 23rd. This is the Moshe keynote, and this is the Vicki keynote.

Also Read:

Israel and Automotive Safety. More Active Than You May Think.

Veriest Meetup Provides Insights on Safety, Deadlocks

Online Verification Meet-up With Intel and Arm!


SiFive Expands RISC-V Technology and its Ecosystem at the Fall Linley Processor Conference

by Mike Gianfagna on 11-11-2020 at 10:00 am

SiFive Expands RISC V Technology and its Ecosystem at the Fall Linley Processor Conference

 

As the Linley Fall Processor Conference winds down, there are certain presenting companies that left a lasting impression. SiFive is one of those companies. On October 21, SiFive introduced the newest member of the SiFive Intelligence family of processor cores, based on the RISC-V ISA and the RISC-V Vector (RVV) extension. And then on October 29 they presented details of the highly anticipated RISC-V PC ecosystem. So, SiFive was busy expanding RISC-V technology and its ecosystem. Here are some details of both presentations.

Extending AI SoC Design Possibilities Through Linux-Capable Vector Processors

This presentation was given by Krste Asanović, SiFive Chief Architect & Co-Founder and Co-Inventor of RISC-V. Krste began with a review of the challenges associated with AI SoC design. These include:

  • Multiple bandwidth-hungry subsystems
  • Multiple proprietary instruction sets for deep learning accelerators
  • Poor memory-bandwidth utilization
  • Complex memory crossbar
  • Power optimization difficult to implement

There are also software challenges for these designs, including:

  • Multiple proprietary accelerator instruction sets
  • Multiple proprietary APIs and outdated libraries
  • New techniques such as deep compression or Winograd transform not supported
  • Memory hierarchy doesn’t match new algorithm requirements
  • Poor compiler support, requires programming at a low level
  • Power optimization difficult to implement

Krste then presented the newest member of the SiFive Intelligence family of processor cores to address these challenges. The family is based on an open industry-standard ISA (RVV v1.0) to prevent vendor lock-in and enable a rich ecosystem for AI. Krste reported this is the first commercially available processor core IP based on the expected final RVV 1.0 specification.

He explained these cores deliver scalable performance to meet AI processing requirements from extremely low-power to high-performance compute applications. The multi-core architecture can integrate Linux-capable or real-time cores with accelerators to provide performance scaling. It also provides an efficient memory hierarchy that maximizes data reuse. The single ISA enables a simple and efficient programming model that allows tuning algorithms for both performance and low power.

Security is always a discussion point for these applications, and SiFive provides comprehensive security support enabled by SiFive WorldGuard, a capability of SiFive Shield to provide true hardware isolation for whole SoC security while enabling software portability. You can see a video overview of SiFive WorldGuard here. There is also an advanced trace and debug solution, making the Intelligence family quite robust.  Krste shared the roadmap for this technology, shown below.

SiFive Intelligence Roadmap

Creating a RISC-V PC Ecosystem for Linux Application Development

This presentation was given by Dr. Yunsup Lee, SiFive CTO & Co-Founder and Co-Inventor of RISC-V. Yunsup detailed what embedded developers need, which includes:

  • Industry Standard Form Factor
  • Advanced Features
  • Linux-Capable Development Platform
  • Out-of-the-Box Software
  • IP Evaluation
  • Expansion

Yunsup explained that SiFive delivers these capabilities with its HiFive Unmatched, a development board for a Linux-based PC that uses its RISC-V processors. A photo of the development board and a summary of key features is shown below.

SiFive HiFive Unmatched

Yunsup detailed some of the capabilities of the SiFive FU740 SoC on this board. These include:

  • SiFive 7-Series Multi-Core Application Processor
    • 64-Bit 8-Stage Dual-Issue, Superscalar RISC-V Core
  • Application Core Complex
    • 4x SiFive U74 Cores
    • RV64GC (RV64IMAFDC)
    • 32KB I$ Per Core
    • 32KB D$ Per Core
  • Single Embedded S7 Core
    • RV64IMAC
    • 16KB I$
    • 8KB DTIM
  • 2MB Coherent Banked L2$
  • Integrated PCIe® Gen 3, DDR4, & I/O

Yunsup mentioned a short video demonstration of HiFive Unmatched used as a professional developer platform. He explained the video will cover:

  • Native compilation
    • Video application
    • Example benchmark
  • GPU accelerated video playback
  • Web browser functionality

The video was posted to the SiFive YouTube channel after the presentation and can be viewed here.

Also available is the Freedom E SDK, a repository of demo programs, industry-standard benchmarks, and board support packages (BSPs) for SiFive’s hardware platforms. The package is available on GitHub here. Yunsup explained the development board will be available worldwide in Q4 2020 for $665 USD. So, SiFive was indeed busy expanding RISC-V technology and its ecosystem.


Prototyping with the Latest and Greatest Xilinx FPGAs

by Daniel Nenni on 11-11-2020 at 6:00 am

Prototyping with the Latest and Greatest Xilinx FPGAs

I was reading the S2C press release announcing their new FPGA prototyping platform based on the Xilinx UltraScale+ VU19P FPGA, and how the new FPGA will accelerate billion-gate FPGA prototyping, and I was struck by the stunning implications of this announcement. Not that billion-gate SoC designs can now be prototyped with FPGAs; the larger FPGA prototyping providers have been talking about this for a while. I was struck by the trajectory that FPGAs are on to hugely simplify FPGA prototyping. Twenty years ago, early FPGAs supported about 5K ASIC gates (Xilinx XC3090) and now the VU19P FPGA boasts an estimated 50M ASIC gates! That’s 10,000 times more ASIC gates from a single FPGA device in 20 years!


Look, the biggest challenge for FPGA prototyping is getting an SoC design working in FPGAs fast. Not only to minimize the set-up effort of just one of the verification tools in the verification toolbox, but also to minimize the risk that the FPGA prototype never produces the expected pre-silicon verification ROI. Generally, bigger FPGAs reduce the number of FPGA devices needed to prototype an SoC design. The previous largest Xilinx FPGA, the UltraScale VU440, has an estimated capacity today of about 30M ASIC gates, and it was announced in 2015.

If the new UltraScale+ VU19P delivers the expected 49M ASIC gate capacity, that’s 1.7 times more ASIC prototyping gates from a single FPGA in 5 years, and if the semiconductor industry is true to form, it’s not unreasonable to expect the FPGA gate capacity growth to be non-linear. So, simply using the same growth factor, it’s easy to project 80M ASIC gate FPGAs in less than 5 years, and 140M ASIC gate FPGAs in less than 10 years.
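The extrapolation above can be sanity-checked in a few lines. Note the computed 5-year factor is about 1.63x, which the article rounds to 1.7x; compounding the rounded factor gives the ~140M figure, while the exact factor lands a bit lower.

```python
# Sanity check of the FPGA gate-capacity extrapolation quoted above.
vu440_gates = 30e6   # UltraScale VU440 (2015), estimated ASIC gates
vu19p_gates = 49e6   # UltraScale+ VU19P (2020), estimated ASIC gates

factor = vu19p_gates / vu440_gates       # ~1.63x per 5 years (rounded to 1.7x)
gates_2025 = vu19p_gates * factor        # ~80M ASIC gates
gates_2030 = gates_2025 * factor         # ~131M (the rounded 1.7x gives ~140M)

print(f"factor {factor:.2f}x, 2025 ~{gates_2025/1e6:.0f}M, "
      f"2030 ~{gates_2030/1e6:.0f}M")
```

Either way, the conclusion holds: a linear extension of the last five years puts single-device capacity in the 80M range by mid-decade and well past 130M within ten years.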

You can see where this thinking is going – we might be able to prototype a billion gate SoC design with 5 or 7 FPGAs in less than 10 years!  Then, the job of getting an SoC design into an FPGA prototype will be super bigly simplified – or maybe just routine.  The task of partitioning billion gate designs into multiple FPGAs gets much easier.

The prototype performance gets better because most of the interconnect would be contained within an FPGA.  And, the cost of ownership should decline to the point where FPGA prototyping for large SoC designs approaches the pervasiveness that we see today for smaller SoC designs that fit into one or a few FPGAs.

If you are a skeptic, and are doubting that the FPGA companies can deliver on this aggressive capacity growth curve, take a look at the highly advanced packaging technology that Intel and Xilinx are using to produce their largest FPGAs today.  The trending approach is to use logic fabric “chiplets” to increase yield on advanced silicon nodes and to reduce cost.

Combine this with 3D silicon interconnect, and heterogeneous die in the same package, and voila! – they could continue down this path all day long, possibly with a faster ASIC gate capacity growth factor than the last 5 years.  As always, I’m betting on technology and a stellar future for FPGA prototyping.

Xilinx Delivers the Industry’s First 4M Logic Cell Device, Offering >50M Equivalent ASIC Gates and 4X More Capacity than Competitive Alternatives

Xilinx XC3000 FPGA Product Description

S2C Accelerates Billion Gate FPGA Prototyping with Xilinx Virtex UltraScale+ VU19P Based Systems

About S2C

S2C is a global leader in FPGA prototyping solutions for today’s innovative SoC/ASIC designs. S2C has been successfully delivering rapid SoC prototyping solutions since 2003. With over 500 customers and more than 3,000 systems installed, our highly qualified engineering team and customer-centric sales team understand our users’ SoC development needs. S2C has offices and sales representatives in the US, Europe, Israel, China, Korea, Japan, and Taiwan. For more information please visit www.s2cinc.com.

Also Read:

S2C Announces 300 Million Gate Prototyping System with Intel® Stratix® 10 GX 10M FPGAs

Webinar: Hyperscale SoC Validation with Cloud-based Hardware Simulation Framework

WEBINAR: Prototyping With Intel’s New 80M Gate FPGA


Aldec Adds Simulation Acceleration for Microchip FPGAs

by Tom Simon on 11-10-2020 at 10:00 am

Simulation Acceleration

Even though FPGA-based systems make it easy to add ‘hardware in the loop’ for verification, the benefits of HDL and gate-level simulation are critical for finding and eliminating issues and bugs. The problem is that software simulators can require enormous amounts of time to run full simulations over sufficient time intervals to locate and eliminate problems. This is where HDL simulation acceleration can help close the gap and improve productivity. Aldec has a white paper titled “HDL Simulation Acceleration Solution for Microchip FPGA Designs” that discusses the topic in detail and provides insight into how designers can gain the speedup of hybrid software- and hardware-based simulation acceleration to rapidly identify and resolve issues.

Simulation Acceleration

FPGA-based systems are heavily used in the aerospace, aviation, and automotive markets. Some of these markets have very specific requirements for reliability and radiation tolerance (RT). FPGAs such as those from Microchip offer excellent solutions for these markets. Microchip’s PolarFire FPGA offers RT, and their SmartFusion2 comes with an embedded Arm Cortex-M3.

The Aldec paper covers each of the verification processes that must be addressed. There is RTL simulation with all the necessary test benches. Then comes post-synthesis simulation, which comes with much more simulation overhead. This is also when any potential discrepancies between gate level and RTL results are examined. Also, IP cores which should be independently verified need to be simulated in-system to ensure proper integration. As always there are regression tests that must be performed throughout the project lifetime. On top of this there is the use of constrained random testing to catch difficult to find corner cases. Constrained random testing usually needs massive amounts of simulation. Lastly, any debugging requires problem identification, fix implementation and verification which calls for the features found in HDL simulation.

In the white paper Aldec describes their solution for hardware acceleration of simulation of FPGA-based systems: HES-DVM. It offers simulation acceleration, emulation, and physical prototyping. For special-purpose applications like RT, or where there is vendor-specific IP, they offer the ability to use the target FPGA for simulation. This means that IP can run natively, using specialized features of the FPGA, during simulation. The testbenches and any HDL needed for debugging run on the Aldec Riviera-PRO or Active-HDL simulators, which are tightly integrated, or on other simulators using PLI or VHPI interfaces. This approach requires no changes to testbenches because the DUT wrapper handles the connection between the HDL simulator and the simulation running on the HES board.

According to Aldec, the key features of their hybrid simulation solution are as follows. First, there is automated design setup with their Design Verification Manager (DVM). Next is HDL compilation of VHDL, Verilog or SystemVerilog to elaborate the design; even mixed-HDL designs are supported. Incremental synthesis follows, converting the HDL into high-level netlists. This approach to synthesis means that individual blocks can be resynthesized without any need to resynthesize the full design, and the granularity can be controlled by creating synthesis groups to optimize results. Aldec’s DVM gives users control over debug probes so that they remain available after synthesis of the HDL. DVM also supports FPGA technology primitives that are instantiated in the HDL/RTL code, and the same is true for hard macros of third-party IP cores. Aldec DVM provides an interface to map memory modules into on-chip or on-board memories, which makes it possible to offer back-door interfaces to read or write memory during runtime. This greatly improves debugging capabilities. DVM lets users specify which signals are of interest so they are preserved. Also, specific blocks can be flagged so that they run in the simulator and not in the FPGA.
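The back-door memory interface deserves a quick illustration. In a toy Python model (the class and method names are mine, not DVM’s API), a front-door access pays bus-protocol overhead on every word, while a back-door access loads the array directly in zero simulated time:

```python
class MemModel:
    """Toy memory model contrasting front-door and back-door access."""

    def __init__(self, size):
        self.mem = [0] * size
        self.sim_cycles = 0  # simulated time consumed by bus traffic

    def front_door_write(self, addr, data):
        # Normal path: every access runs the full bus protocol.
        self.sim_cycles += 4  # assumed 4-cycle bus transaction
        self.mem[addr] = data

    def back_door_write(self, addr, data):
        # Debug path: load the memory directly, no simulated time.
        self.mem[addr] = data

# Loading a 1K test image through the back door costs nothing in
# simulated time; through the front door it would cost 4096 cycles.
mem = MemModel(1024)
for a in range(1024):
    mem.back_door_write(a, a & 0xFF)
```

The same asymmetry is why back-door checking of results at the end of a test, rather than front-door readback, can shave large amounts of runtime.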

The white paper concludes with an example scenario of verifying a radar design. Aldec has a unique and powerful solution for avoiding the bottlenecks and delays of full-system HDL simulation. At the same time, it offers the visibility and debugging power found in HDL-based simulation. On top of this they have added target-FPGA-based simulation, so that vendor-specific IP is fully supported. The Aldec simulation acceleration solution offers the performance and flexibility needed to sign off complex aerospace, aviation, military and automotive systems. If you are interested, the full white paper is available on the Aldec website.

 


WEBINAR: Differentiated Edge AI with OpenFive and CEVA

by Bernard Murphy on 11-10-2020 at 6:00 am

Enabling AI Vision at the Edge

OpenFive is hosting a webinar with CEVA on November 12th to talk about how OpenFive’s vision platform, leveraging CEVA vision and AI solutions, can get you to a differentiated solution for your product with as much or as little silicon participation on your part as you want. I talked briefly to Jeff VanWashenova (CEVA Sr. Director of AI and Computer Vision) to get a sense of the opportunity. He sees a lot of interest from system product teams in automotive Tier 1s and in other edge applications. These teams want to put differentiated, high-performance, low-power AI at the heart of their edge products. But chip design isn’t their core skill. They need help with vision, with AI, and with a platform and the silicon expertise to put the whole thing together. REGISTER HERE to watch the webinar.

OpenFive/CEVA partnership

That’s where the OpenFive and CEVA partnership comes in. CEVA and SiFive (the parent of OpenFive) already have an established partnership to bring AI to mainstream edge markets. CEVA is already well known in intelligent computer vision with their NeuPro architecture for CV, SLAM and wide-angle imaging applications. OpenFive adds to that the SiFive IP platform, plus the silicon experience they bring from their previous incarnation as Open-Silicon, a full-service ASIC shop.

Between OpenFive and CEVA you have access to a turnkey design solution, all the way through manufacturing, assembly and test, while still enjoying the advantage of RISC-V ISA extensions to optimize performance in the CPU core(s), plus high-performance, low-power CV and neural-net inferencing. You can develop and optimize your software on a platform customized to your specific needs and optimized for edge constraints.

About OpenFive

OpenFive is a solution-centric silicon company that is uniquely positioned to design processor agnostic SoC architectures. With customizable and differentiated IP for Artificial Intelligence, Edge Computing, HPC, and Networking solutions, OpenFive develops domain-specific SoC architectures based on high-performance, highly efficient, cost-optimized IP to deliver scalable, optimized, differentiated silicon. OpenFive offers end-to-end expertise in Architecture, Design Implementation, Software, Silicon Validation and Manufacturing to deliver high-quality silicon.

About CEVA

CEVA is the leading licensor of wireless connectivity and smart sensing technologies. We offer Digital Signal Processors, AI processors, wireless platforms and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence. All of which are key enabling technologies for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial and IoT.

CEVA ultra-low-power IPs include comprehensive DSP-based platforms for 5G baseband processing in mobile and infrastructure, advanced imaging and computer vision for any camera-enabled device and audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and IMU solutions for AR/VR, robotics, remote controls, and IoT. In artificial intelligence, we offer a family of AI processors capable of handling the complete gamut of neural network workloads, on-device. For wireless IoT, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6 (802.11n/ac/ax) and NB-IoT.

Also Read:

Open-Silicon SiFive and Customizable Configurable IP Subsystems

Ethernet Enhancements Enable Efficiencies

RISC-V End to End Solutions for HPC and Networking


Post Election Fallout-Let the Chips Fall / Rise Where They May

by Robert Maire on 11-09-2020 at 10:00 am

US China Trade War
  • Changing US administration likely positive for chips & tech
  • Stepping back from the brink of a potential ugly trade war
  • Likely increased Covid/tech spend- War with Big Tech is over
  • Tech & Chips had a lot riding on the elections outcome

As compared to previous presidential elections, this one likely has much more potential impact on the tech industry as a whole, and on chips and equipment specifically.

The open trade war between the US and China has seemingly devolved into a tech “cold war” as the US has gone after Huawei, which threatened US dominance in 5G.

The ongoing “tit for tat” has looked like it would break back out into open warfare after the US cut off equipment sales (without a license) to SMIC of China.

It has had the sort of feel of the Cuban missile crisis in which neither side would back down and continued to escalate.

We think the change in management in the US will give China reason, and an opening, to back off the escalation and “stand down and stand by.”

The Chinese government is likely hopeful that the incoming administration will be less provocative and aggressive when it comes to China trade.

Trump had something to prove, Biden doesn’t

One of the biggest talking points for Trump in the 2016 election, aside from the wall and immigration, was the imbalance of trade with China (and other countries).

The trade deal with China (which turned out to be ineffective for trade balance..) was initially the focal point and when that didn’t work out as expected the focus shifted to Huawei and 5G as the new proxy for the Chinese trade issue.

Had we not had the distraction of Covid, the trade war would likely have been raging white hot by now. Given its less important status, the trade war has been simmering and stumbling forward.

I am hard pressed to remember anything specific from Biden regarding China trade, as the commentary has been more general and vague about who would be “tougher on China” rather than specifics on Huawei.

Biden in a box

We certainly don’t think that Biden will do a 180 and go soft on China out of the box. Quite the opposite…we think that Biden cannot be seen as “soft” on China and will not significantly back off (at least not publicly).

More importantly, China trade is one of the very few issues with strong bi-partisan support across the entire US. For an administration trying to foster bi-partisan cooperation, keeping up pressure on China trade is an easy way to build a cooperative political environment.

Nothing unites people more than a common foe and the US needs a lot of uniting right now.

Two more months and possible “scorched earth”

President Trump is obviously not in a good mood and may choose to lash out just because he can or to prove a point. He also has little concern about the consequences at this point being a “lame duck”.

He could completely ignore Covid, leaving the mess for Biden to clean up, and go back to focusing on the issues that are near and dear to him, such as China trade (one of his “go to” subjects).

Knowing that China won’t likely react, as it waits for a friendlier Biden administration, means that he could do virtually anything without fear of repercussion (short of something crazy).

He could also try to prove as much as possible prior to January, with an eye on 2024.

The problem is that a little over two months is not a lot of time to get something accomplished but certainly enough time to cause trouble.

China Risk Redux

Our view is that overall, the immediate threat of a trade war with China going “nuclear” is much diminished.

Everyone, in both the current and future administrations, is preoccupied with other things to worry about or focus on.

This is perhaps more true of the outgoing administration, which likely will not get back to a strong focus on China with the amount of time remaining on the clock.

Probably not a lot will happen over the remaining couple of months, and not a lot will happen in the first few months of the new administration, so we could easily see a six-to-nine-month reprieve before we have a better idea of the new administration’s stance. Maybe even longer, depending upon how long Covid is a problem.

Though the “deep state” people who focus on China trade issues every day likely won’t change significantly, at least not right away, the urgency and focus will likely diminish given different priorities.

Silicon Valley (big tech) being blue isn’t so bad

Big tech has been a big target of the current administration for problems both real and imagined. Facebook, Twitter, Google….not so much Tim “Apple,” who catered to Trump’s ego. BK of Intel held up a shiny wafer for Trump in the Oval Office but later quit the tech council.

Tech has not been on Trump’s BFF list, and the relationship has been worsening by the day, especially in the last few days since the election.

Biden has not had many, if any, photo ops with Silicon Valley types, and at the same time he does not regularly criticize them, so it seems more like a neutral relationship, which is likely a strong improvement for most Silicon Valley execs. Most would likely prefer to get back to flying below the radar and making money rather than getting grilled before Congress and lambasted by the president.

It’s also not like Biden had to kiss butt in the valley, as California is as blue as New York, so he likely doesn’t owe any favors.

In general, big tech overall can go back to its low key way of doing business and controlling our lives and pocketbooks without a lot of publicity.

Even though Musk seems to have gotten along well with Trump, his company, Tesla, may do better under “greener” pastures of climate change concerns.

Semiconductor Stocks

Have been on fire during Covid and have been running up strong during earnings. The recent merger mania has re-ignited valuations and interest in the group. Covid demand has been strong.

The change in US administration may prove to be a boon for the pending chip M&A transactions.

Our view had been that many if not all of the chip M&A deals would be DOA in China as retribution for the US cutting off SMIC. Indeed, the Applied/Kokusai deal has been held up forever and is on the brink of collapse due to China. Could we see China let the current crop of chip M&A deals slide as an “olive branch” to the incoming administration?

There are a lot of deals in the hopper – Analog Devices/Maxim, Nvidia/ARM, AMD/Xilinx – and who knows who’s left or next?

While it’s unclear whether China will extend the olive branch, or whether it could act fast enough to rescue Applied/Kokusai, at the very least this is a very positive step for these companies and deals. It will likely grease the skids for other deals that otherwise may not have even been contemplated for fear of the Chinese reaction.

In general, the results of the election are positive, both short and longer term, for the semiconductor industry specifically and for tech as a whole.

We think it reduces the probability of our “doomsday” scenario of a full-blown chip and equipment embargo to a much more manageable risk, to the point where it is much less of a focus for investors in the group.

While reducing the risk it also opens up the possibility of more M&A which will also drive valuations, obviously more so for potential targets.

AMD remains preferable over Intel. Equipment companies will continue to do well and may get more export licenses than previously expected.

All in all a positive election outcome for chips and tech.

Also Read:

Downplaying SMIC – Uplaying TSMC

Coronavirus Remains Good for Semiconductors but not China

Is Intel Losing its Memory?


Achronix is Driving the Fourth FPGA Wave

by Mike Gianfagna on 11-09-2020 at 8:00 am

Achronix and the Fourth FPGA Wave

Technology typically evolves in waves. Sometimes it’s referred to as a “revolution” or an “age”. The industrial revolution and the information age are examples. These kinds of categorizations help to clarify the impact of innovation in ways that are relevant to everyone – you can’t look away if the world is changing around you. So, when I heard Achronix is driving the fourth FPGA wave at the recent Fall Linley Processor Conference, I simply couldn’t look away.

Mike Fitton

The presentation was given by Mike Fitton, senior director of strategy and planning at Achronix. Mike has 25+ years of experience in the signal processing domain, including system architecture, algorithm development, and semiconductors across wireless operators, network infrastructure and most recently in machine learning. He also holds a PhD in mobile telecom. Mike clearly has the experience and pedigree to comment on technology waves.

He began by reviewing the first three waves of FPGAs:

  • First wave, mid 1980’s: Altera and Xilinx create the FPGA market around glue logic and programmable I/Os
  • Second wave, mid 1990’s: Connectivity and switching are added, making FPGAs quite a bit more complex
  • Third wave, 2018: Data acceleration in the cloud is added for applications like machine learning/AI, network acceleration and computational storage. 5G infrastructure and autonomous driving use FPGAs as well. FPGA complexity goes way up

Next, the definition of the fourth wave:

  • Fourth wave, 2020: Ubiquitous computing at the edge. The edge is where much of the data from IoT and 5G networks will be processed. Smart factories, smart cities, fronthaul convergence and sensor fusion are some examples

Mike presented some attributes of this new environment:

  • Developer friendly (software programmable)
  • Lower latency
  • Reduced infrastructure cost
  • Enhanced security
  • COVID-19 is accelerating the transformation

He went on to explain the evolution of FPGA deployment, beginning with discrete, programmable FPGAs for the cloud and evolving to embedded FPGA fabrics for purpose-built SoCs in edge computing. He further pointed out that Achronix is unique in the industry as the only FPGA vendor to offer both discrete and embedded products. The following summarizes the attributes of each deployment strategy.

  • Speedster®7t FPGA family
    • Reprogrammable workload acceleration
    • Off-the-shelf product
    • High speed interfaces
    • High bandwidth memory
    • Low to mid volume applications

 

  • Speedcore™ eFPGA IP family
    • Monolithic or chiplet integration in customer ASIC
    • Mid to high volume applications
    • Lower device cost/power than standalone FPGA
    • Customer defined device resources
    • High bandwidth/low power interconnect

Mike explained that there is essentially a compiler to configure the embedded FPGA block to the precise requirements of the SoC, including memory size, FPGA capacity and custom functions. This approach allows a very cost-effective deployment of FPGA technology since there is no wasted capacity, no additional high-speed I/O and no separate package.
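The economics of compiling a fabric to the SoC’s exact needs can be sketched with a toy cost model (the parameter names and area weights below are invented for illustration, not part of the Speedcore tool flow):

```python
def fabric_cost(luts, bram_kbits, dsp_blocks,
                w_lut=1.0, w_bram=0.5, w_dsp=8.0):
    """Relative area of an embedded fabric built to spec: the SoC pays
    only for the resources the workload actually uses."""
    return luts * w_lut + bram_kbits * w_bram + dsp_blocks * w_dsp

# A discrete FPGA must be bought at the next size up, with unused
# capacity, high-speed I/O and package included (modeled here as an
# assumed 1.6x overhead on a 1.6x-oversized device).
needed = fabric_cost(luts=25_000, bram_kbits=2_048, dsp_blocks=64)
discrete = 1.6 * fabric_cost(luts=40_000, bram_kbits=4_096, dsp_blocks=128)
savings = 1 - needed / discrete
```

The specific numbers are hypothetical, but the structure of the argument is the article’s: trimming unused capacity, extra I/O and the separate package is where the embedded approach wins at volume.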

This approach is quite significant in that flexibility, die size and power can all be traded off as required in the product development cycle, thanks to the ability to build essentially custom FPGAs for a specific SoC. Since late changes can be accommodated by the FPGA fabric, tapeout can be accelerated as well. Product lifecycles can also be extended since the embedded programmable fabric can accommodate changing algorithms and changing standards.

These benefits hit home for me. In a prior life, I worked on AI enablement for ASICs. We had a lot of sophisticated, configurable IP, such as transpose memory and various convolution engines. The biggest challenge was keeping the chip from being obsolete when it returned from the fab because of ever-changing algorithms. The fourth wave of FPGAs handles this problem nicely.

With the spectrum of FPGAs offered by Achronix, you can prototype on Speedster7t FPGAs and migrate to Speedcore eFPGA IP to optimize the product. So that’s how Achronix is driving the fourth FPGA wave. You can learn more about the Achronix Speedster7t family of FPGAs here and Speedcore embedded FPGAs here.