
ARMing AI/ML

by Bernard Murphy on 03-24-2017 at 7:00 am

There is huge momentum building behind AI, machine learning (ML) and deep learning; unsurprisingly, ARM has been busy preparing its own contribution to this space. This week they announced a new multi-core micro-architecture called DynamIQ, covering all Cortex-A processors, whose purpose is, in their words, “to redefine flexible multicore and heterogeneous compute to enhance the experience of diverse, increasingly intelligent devices from chip to cloud”. What looks particularly important here is support for heterogeneous clusters, including CPUs in the cluster which may not be from ARM, as well as faster links to the accelerators increasingly found in AI applications.

This raises several interesting questions. One might be how ARM plays in this space at all – isn’t all this stuff specialized to neural nets? Some of it certainly is, but not all of it. Non-neural ML usually runs on CPUs. And the training part of deep learning typically runs on GPUs, where NVIDIA has a strong head start, but I wouldn’t be surprised if system design teams are thinking about working with ARM on Mali-based solutions.

Equally, at least to this reviewer, whatever AI/ML solutions you use, they can’t exist in isolation. Particularly in mobile and IoT applications (and maybe increasingly in HPC), they must drop into an existing infrastructure which already provides must-have functionality in power management, security, wired or wireless communication, embedded software and debug, cloud access, provisioning and more. In other words, the ARM ecosystem.

Advanced AI capabilities won’t be compelling on edge nodes unless they are high performance and low power, and are safe, private and secure. I’ve talked about vision and speech recognition moving to the edge because it is too expensive, in power and in latency, to perform those functions through round-trips to the cloud (and may be impossible at times if a wireless connection is not available). It’s a no-brainer that privacy, security and safety are best managed locally with minimal or no need for off-device communication. Which means you need yet more complex/intelligent software running with low latency on energy-sipping devices already set up to manage those other needs.

To get improved latency, ARM believes that heterogeneous compute engines and accelerators need high-performance access to the compute cluster. This is part of what DynamIQ offers. It starts with a ground-up redesign for multi-core which they present as an evolution of big.LITTLE. All Cortex-A cores will be upgraded to this new (v8.2) architecture, as will CoreLink. These will be backward-compatible with existing software and systems, although existing Cortex-A cores/CoreLink will not be forward-compatible. A cluster can support up to 8 cores, and these can be heterogeneous and even non-ARM, as long as they comply with open 8.2 standards. The new Cortex-A cores include multiple improvements for floating-point and matrix multiply. They didn’t get deep into the nature of these improvements in the press briefing, but they did say they especially target a 50x improvement in AI performance on ARM-based designs.

In power management, DynamIQ provides fine-grained speed control in the CPUs, more control over power-state switching (every core in a cluster can operate in a different power state) and autonomous power management for CPU memories.

And in safety, ARM is already well-established in ASIL-D industrial and automotive applications. I would guess that getting ASIL-D signoff on that great new vision recognition sub-system will be greatly eased by integration into DynamIQ.

While I’ve stressed edge applications in this article, ARM also expects significant value for DynamIQ in server/cloud applications. Apparently, ISA enhancements have been developed in close cooperation with important partners specifically to help in this domain. ARM also expects the platform to be very attractive in networking applications thanks to reduced latency within 8-core clusters.

When are we likely to see this technology, both in access for design teams and in end-user products? ARM told us that multiple early-access partners are already working with the technology. I got a mixed message (possibly I wasn’t listening carefully enough) on when the rest of us may get access or see products. I think I heard we would start to see products in 2018 and may see access to the new standard and technology sometime this year. But don’t take my word for that – check with your ARM rep.

You can learn a little more about DynamIQ HERE.

More articles by Bernard…


How to Design a Custom SoC with Analog, webinar from ARM and Tanner EDA

by Daniel Payne on 03-23-2017 at 12:00 pm

Leading edge SoC designs can contain billions of transistors, cost over $10M to design, and take over 18 months to deliver, but not all SoCs require that much complexity, cost and time. In fact, there is a growing class of SoC designs that integrate the popular ARM Cortex-M0 processor along with analog blocks that work with sensors to serve markets like industrial and IoT. ARM and Tanner EDA (Mentor Graphics) have teamed up to deliver a webinar next week on this timely topic that I’ll be attending and blogging about.

Overview
Custom SoCs are the new vogue: Analog is becoming smart, and there is a new wave of custom SoCs integrating analog and digital to create smaller, lower-cost products.

In this webinar, attendees will learn how custom SoCs are helping OEMs and analog silicon providers alike create smarter, smaller and lower cost products. Attendees will then learn how they can use Tanner AMS tools to combine their analog sensor with an ARM Cortex-M0 processor to create a custom SoC. We will cover mixed signal simulation of the analog block with a digital interface module, then connect that module to the Cortex-M0’s AMBA Advanced Peripheral Bus. Finally we’ll create a test program and simulate analog, digital, and software together.

What You Will Learn

  • How custom SoCs are helping OEMs and analog silicon providers alike create smarter, smaller and lower cost products
  • How to perform mixed signal simulation
  • How to integrate a custom analog component into the ARM AMBA Advanced Peripheral Bus (APB)
  • How to write and compile simple firmware to test the integration


Who Should Attend

  • Analog designer
  • Mixed signal designer
  • Engineering manager


About the Presenters

Jeff Miller is a lead strategist and manages Tanner’s analog and mixed signal product lines at Mentor Graphics. Jeff started as an engineer at Tanner Research in 2002 and became product manager of Tanner EDA in 2007.


Phil Burr is a senior product marketing manager in ARM’s CPU group. He leads a team responsible for ARM’s established CPU portfolio, helping ensure that these processors enable new and existing partners to innovate.

Registration
You may register online for the webinar replay HERE.


    Top 10 Updates from the TSMC Technology Symposium, Part II

    by Tom Dillinger on 03-23-2017 at 7:00 am

    An earlier article described some of the technical and business highlights from the recent TSMC Symposium in Santa Clara (link). This article continues that discussion, with the top five updates.
    Continue reading “Top 10 Updates from the TSMC Technology Symposium, Part II”


    Samsung Should Just Buy eSilicon Already!

    by Daniel Nenni on 03-22-2017 at 12:00 pm

    As you all know I’m a big fan of the ASIC business dating back to the start of the fabless semiconductor transformation where anybody could send a design spec to an ASIC company and get a chip back. The ASIC business model also started the smart phone revolution when Samsung built the first Apple SoCs for the iPhones and iPads.

    Today even systems companies that have been doing their own chips for years are turning to the big ASIC companies for access to leading-edge technologies and the design know-how to get them quickly into production. Cloud gateway chips, for example, have become incredibly large and utilize bleeding-edge technologies such as FinFETs, high-bandwidth memory, ultra-fast SerDes and 2.5D packaging.

    This brings us to the topic at hand which is a newsworthy tape-out from eSilicon in partnership with Rambus and Samsung. It really is a nice proof point of my previous blog “Succeeding with 56G SerDes, HBM2, 2.5D and FinFET” which covered the “Advanced ASICs for the Cloud-Computing Era: Succeeding with 56G SerDes, HBM2, 2.5D and FinFET” seminar at Samsung HQ.

    Before I get into the technology and gratuitous quotes, it is interesting to note that two of the top three ASIC companies are tied to specific foundries: Avago/Broadcom is partnered with TSMC, and IBM ASIC is now part of GlobalFoundries. eSilicon, however, is a switch hitter between TSMC and Samsung. So if Samsung wants to take its foundry business to the next level, it should just buy eSilicon. Absolutely.

    The monster chip in question is on Samsung 14nm LPP and includes: eSilicon® eFlexCAM™ TCAMs, eFlex™ embedded memories, extended-voltage-range general-purpose I/O (EVGPIO), a silicon interposer, 28G SerDes, a high-bandwidth memory (HBM2), five different types of custom memories, an HBM Gen2 PHY, interposer design and a custom flip-chip package which is based on Samsung’s I-Cube™ solution. I-Cube is Samsung’s full 2.5D turnkey solution, which connects a logic chip and HBM2 memory with an interposer.

    eSilicon highlights

    • Successful 2.5D ASIC production tape-out of networking and computing chip based on eSilicon’s silicon-proven Samsung 14LPP IP platform
    • eSilicon’s end-to-end 2.5D/HBM2 solution includes 2.5D ecosystem management, HBM2 PHY, ASIC design, 2.5D package design, manufacturing, assembly and test
    • Memory IP customization for optimal power, performance, and area
    • Overdrive and super overdrive support speeds beyond 2.5GHz targeted to high-bandwidth, high-performance networking and computing applications

    And now for the gratuitous quotes:

    “I am delighted to announce 14nm network processor tape-out,” said Ryan Lee, Vice President of Foundry Marketing Team at Samsung Electronics. “This successful product tape-out was combined with eSilicon’s proven design ability in network area and Rambus’ expertise in SerDes and Samsung’s robust process technology along with I-Cube solution. This collaboration model is very unique solution which will have very big impact in network foundry segment. Samsung will keep developing its network foundry solution to be a meaningful total network solution provider aligned with its process roadmap from 14nm and 10nm to 7nm.”


    “eSilicon is proud to deliver a complete 14LPP IP platform for high-bandwidth, high-performance computing,” said Patrick Soheili, vice president of product management and corporate development at eSilicon. “Working with Samsung at 14LPP, and beginning to work in 10LPP, allows us to build on our past success in HBM2, 2.5D and specialty memories at advanced nodes. This is one of the world’s first production tapeouts of a 2.5D ASIC.”

    About eSilicon
    eSilicon guides customers through a fast, accurate, transparent, low-risk ASIC journey, from concept to volume production. Explore your options online with eSilicon STAR tools, engage with eSilicon experts, and take advantage of eSilicon semiconductor design, custom IP and IC manufacturing solutions through a flexible engagement model. eSilicon serves a wide variety of markets including the automotive, communications, computer, consumer, industrial products and medical segments. Get the data, decision-making power and technology you need for first-time-right results. www.esilicon.com


    Top 10 Updates from the TSMC Technology Symposium, Part I

    by Tom Dillinger on 03-22-2017 at 7:00 am

    Last week, TSMC held their 23rd annual technical symposium in Santa Clara. In the fall, TSMC conducts the OIP updates from EDA/IP partners and customers; the spring symposium focuses solely on TSMC’s technology development status and future roadmap. Indirectly, the presentations also provide insight into the electronics industry as a whole, based on the market segments where TSMC is focusing its R&D investments.
    Continue reading “Top 10 Updates from the TSMC Technology Symposium, Part I”


    SRAM Optimization Saves Power on SoCs and in Systems

    by Tom Simon on 03-21-2017 at 12:00 pm

    Mobile device designers face the dilemma of reducing power while maintaining or increasing performance. Consumers will not tolerate longer battery life bought at the expense of performance. If it were otherwise, designers could simply dial back clock rates. Without this simple cure, the best way to reduce power for longer device operating life is to reduce supply voltage. While this is highly effective, it comes at a price.

    The choke point for voltage reduction is usually SRAM. Decreases in supply voltage have an exponential effect on memory error rates. Upsizing bit cell transistors or switching to 8T or 10T configurations can alleviate this, but these approaches will increase area and possibly lead to increased leakage losses in all operating modes.

    Once the bit cells are optimized as much as possible, the remaining factor is the efficiency of the power distribution network (PDN). If PDN losses can be minimized, this acts as an effective reduction of Vcc-min for the entire chip. Improvement in the PDN voltage drop at the worst endpoint helps the power performance of the entire chip.

    Magwel’s resistance network solver, RNi, is aimed directly at this kind of problem. It reads in the design GDS containing the full PDN. Users can select or define equipotential contacts anywhere on the network. Using information easily obtained from the foundry-supplied ITF file for back-end metallization resistivities, RNi meshes and then rapidly solves the entire PDN. When it is finished, not only are the endpoint resistances available, but so are the point-to-point (p2p) resistances at all points along the net back to the specified contacts or top-level pads.

    To make the results as easy as possible to interpret, RNi displays the resistance values in its Field View, using color to indicate resistance. Mousing over any metal reveals the numeric resistance value to that point. Usually a designer will set the contacts to be the top-level pads for the net of interest. However, RNi has the flexibility to solve for resistances from any arbitrary point in the network. Users can easily create a contact/pad anywhere they choose.

    Recently a large Taiwanese memory chip company made a breakthrough in power efficiency with RNi, using it to reduce the voltage drop in their SRAM PDN by nearly 100 mV. Let’s look closer at how RNi helped reduce the PDN resistance in critical locations. The main technique consists of visual examination using the Field View in RNi to locate endpoints on the first metal layer that are over 10 ohms; this can be done by setting the color scale so that 10 ohms, for instance, is red. Because memory chips use only a few metal layers at a fixed pitch, when a resistive endpoint is found it is usually easiest to add parallel wires to the PDN to reduce resistance and carry more current.

    Connection by Abutment Leaves Gap
    Notice the sharp color change due to non-connection at abutment point

    RNi is especially useful for locating design errors that can contribute to higher than necessary end-point resistance. One of these is gaps in abutment connections. These gaps are hard to see and can go unnoticed. In the RNi Field View they are easy to identify because of an abrupt color change along a wire. The resistances, and display color, along the broken connection do not change monotonically, but rather have a sharp change that is readily visible.

    Missing Connections Create Large Voltage Drops on PDNs
    Note the red horizontal nets on the left side, as their resistance goes up

    The other usually hard-to-find problem is missing vias. A missing via can escape detection because other parallel connections provide continuity, but it can have a major impact on PDN resistance. In RNi, the Field View will show a sharp contrast in the colors of the crossing wires.

    Resistance extraction tools can help identify which endpoint has the highest interconnect resistance. However, PDNs have unusual topologies: large size, many self-intersections, wide and thick metals, metal slotting and parallel paths. All of these traits make it harder to efficiently and accurately extract metal resistance. The difficulty only grows when it is important to identify the specific sections of the PDN that contribute most to endpoint voltage drop.

    Alternatively, there are 2.5 and 3D field solvers that can precisely extract resistance. The downsides of these solvers are that they are difficult to set up, run slow, and often only output a lumped resistive netlist. Instead what is needed is a tool that requires no specialized training and does not convert the PDN into a lumped netlist.

    There is now an effective tool for locating portions of PDNs with higher voltage drop. Every millivolt saved on the PDN translates into lower operating voltage and significant power savings without compromising performance. Magwel’s website contains more information on RNi and their other tools for improving reliability and yield – such as ESDi for ESD verification, and PTM for dynamically modeling power transistors typically found in PMICs.


    Quantum Resistance on the Edge

    by Bernard Murphy on 03-21-2017 at 7:00 am

    I’ve written recently about the trend to move more technology to the edge, to mobile devices certainly but also to IoT edge nodes. This is based particularly on latency, communications access and power considerations. One example is the move of deep reasoning apps to the edge to handle local image and voice recognition which would be completely impractical if recognition required a round trip to the cloud and back.


    Another example concerns quantum computing and its potential to undermine cryptography, so that anything you think is secure (on-line shopping/banking, personal medical information, the national power grid, national security) will be easily accessed by anyone who can afford a quantum computer (nation states and criminal enterprises). If this is possible at the desktop/cloud level, it should be even more of a risk in mobile and IoT devices.

    Conventional cryptographic methods rely on the significant difficulty of solving a mathematical problem, such as factoring an integer computed as the product of two large prime numbers. While the complexity of these problems can be arbitrarily high, a combination of clever mathematics and distributing the solver over massive networks of PCs has broken some impressively large cryptography keys. Cryptographers have been able to crank up the size of the key to stay ahead of the crackers, but quantum computing threatens to break through even that line of defense. (Other techniques based on elliptic curves and discrete logarithms are similarly limited.)

    The mathematics of determining how hard it is to crack an algorithm is challenging, and in practice leads to upper bounds based on the best-known cracking algorithms, those bounds being progressively refined over time as improved methods are discovered. The best-known solution to the general factorization problem has a complexity (in terms of time taken to solve the problem for a given key-size) which grows at a rate slightly slower than exponential with key-size.
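For reference, that “slightly slower than exponential” growth is usually quoted as the heuristic complexity of the general number field sieve, the best-known classical factoring algorithm, written in standard L-notation:

```latex
% Heuristic GNFS running time for factoring n -- sub-exponential in the
% bit-length of n:
L_n\!\left[\tfrac{1}{3},\,\sqrt[3]{\tfrac{64}{9}}\right]
  = \exp\!\left(\left(\left(\tfrac{64}{9}\right)^{1/3} + o(1)\right)
    (\ln n)^{1/3}\,(\ln \ln n)^{2/3}\right)
```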

    Enter quantum computing (QC). Skipping the gory details, the point about QC is that for a given number of quantum bits, it can evaluate all possible settings of those bits at the same time. So if the QC had N bits, it could evaluate all 2^N possibilities in parallel. This makes QCs able to solve problems of exponential complexity in reasonable time. Certainly, the integer factorization problem (widely used in production cryptography today) would be completely exposed to machines with a large enough number of quantum bits.

    So, RIP cryptography? Not so fast. I wrote about a year ago on lattice-based cryptography methods, specifically designed to be hard for QCs to crack. That’s the thing about math – you invent a way to crack a class of problems, then the mathematicians come up with a new algorithm which defeats your invention. Support for this method was added to OpenSSL back in 2014, though this seems to have eluded many people who write on QC threats to encryption.


    But while that solution is good for desktops and the cloud, it’s more problematic for mobile and edge devices which are much more power- and latency-sensitive (and therefore, as mentioned earlier, prefer to avoid or minimize tasks requiring cloud communication). To address this need, SecureRF now offers a solution using a Diffie-Hellman-like authentication protocol with a 128-bit key, but resistant to known QC attacks; it is also 60x faster than elliptic curve cryptography and uses 140x less energy. The digital signature algorithm is based on group-theoretic methods (built on braid groups), known to be quantum-resistant. Arrow’s Chameleon96 Community board hosts this solution today.

    A word on quantum resistance. Complexity bounds at these lofty heights are difficult to find and prove. What is known is that the QC algorithms which can crack non-quantum encryptors, like Shor’s and Grover’s, can be defeated by resistant algorithms. Also, all quantum-resistant approaches I have seen are at least NP-hard, which means they are expected to be very hard to solve. That doesn’t prove they can’t be cracked by some QC method, but no-one knows of such a method and QC has very little wiggle room. If a resistant algorithm is even a little bit super-exponential in complexity, then the effort to crack it using QC increases faster than linearly as key-size increases, taking us back to where we started.

    One last technology note for QC key-cracking enthusiasts. Commercial QC doesn’t yet exist beyond relatively small word sizes (e.g. at Google) though IBM and others claim they will release systems within the reasonably near future. Then again, the NSA is known to be working on QC and China has already launched a satellite to support quantum key exchange over long distances. Different technology, but it’s not a big stretch to assume they’re also working on QC. And we must assume Russia is doing the same. So yeah, you should probably assume that at least nation-state hackers will be able to crack your non-quantum-resistant cryptography today or very soon and therefore all responsible systems should plan to be resistant to quantum attacks. Solutions from vendors like SecureRF will necessarily be required in any system used in commerce, automotive, medical and other secure applications.

    You can read the SecureRF press release HERE and get more information on their products HERE.

    More articles by Bernard…


    Joe Costello and Other Luminaries Keynote at DAC

    by Daniel Payne on 03-20-2017 at 12:00 pm

    The most charismatic EDA CEO that I have ever witnessed is Joe Costello, who formed Cadence by merging SDA (Solomon Design Automation) and ECAD (known for DRC with Dracula). You will be impressed with his Monday keynote at DAC on June 19th, starting at 9:15AM. Joe has long since left the EDA world and is currently the CEO of a company called Enlighted that is bringing the IoT to smart buildings, and yes, they actually have big-name customers.

    Monday Keynote
    IOT: Tales from the Front Line
    Monday, June 19, 9:15AM – 10:00AM

    Joe Costello, Chairman & Chief Executive Officer
    Enlighted, Inc., Sunnyvale, CA

    There is a lot of talk about the potential of the Internet of Things. But what is happening on the front lines? Where are the examples of real impact?

    Enlighted CEO Joe Costello will discuss how the IoT is impacting commercial real estate, the largest asset class in the world, by giving buildings a “sensory system” akin to a human body. Once deployed, there are a multitude of new opportunities to improve business processes thanks to granular data that has never been available before.

    Learn how this technology is currently being developed and applied, the challenges, along with predictions for the future of IoT in commercial buildings.

    Tuesday Keynote
    The Rise of the Digital Twin
    Tuesday, June 20, 9:00am – 10:00am

    Chuck Grindstaff, Executive Chairman
    Siemens PLM Software Inc., Plano, TX

    A new concept is sweeping the industrial machine market: the digital twin. Using high performance software, a digital copy of the machine is created and developed simultaneously with the actual physical product. This allows design ideas to be quickly tested and constantly refined throughout a machine’s entire lifecycle.

    Sound familiar? EDA has of course been doing this for decades in electronics with integrated circuit design, even as designs became staggeringly complex with billions of transistors.

    In his keynote, Chuck Grindstaff, Executive Chairman of Siemens PLM Software, will explore the crucial role of digitalization in assisting engineers to design, simulate and verify products that increasingly incorporate both mechanical and electronic capabilities. For example, the Industry 4.0 initiative is digitally transforming factories, using sophisticated electronics to boost efficiencies from concept through all stages of the product life cycle. Another area undergoing massive transformation is the automotive industry, where today’s cars are becoming digital platforms on wheels, with the electronics approaching 50% of the BOM costs. Join Mr. Grindstaff as he examines the fertile new intersection of electronics and mechanical design and how it will transform both industries.

    Wednesday Keynote
    Accelerating the IoT
    Wednesday, June 21 | 9:00am – 10:00am

    Tyson Tuttle, Chief Executive Officer
    Silicon Laboratories, Inc., Austin, TX

    The Internet of Things (IoT) has been hailed as the next frontier of innovation in which the everyday “things” in our homes, offices, cars, factories and cities connect to the Internet in ways that improve our lives and transform industries. The IoT market is poised to exceed 75 billion connected devices by 2025, but several challenges remain in achieving the market’s full potential. Tyson Tuttle, CEO of Silicon Labs, will explore what it will take to accelerate the promise of the IoT. In his keynote, Tyson will consider the market imperatives and engineering challenges of adding connectivity to electronic devices, including cost, ease of use, energy efficiency, interoperability, future extensibility, and security. Addressing these challenges will unleash the limitless possibilities of a more connected world.

    Thursday Keynote
    Emotion Technology, Wearables, and Surprises
    Thursday, June 22 | 9:10am – 10:00am

    Rosalind Picard, Professor
    Massachusetts Institute of Technology, Cambridge, MA

    Years ago, I set out to create technology with emotional intelligence, demonstrating the ability to sense, recognize, and respond intelligently to human emotion. At MIT, we designed studies and developed signal processing and machine learning techniques to see what affective insights could be reliably obtained. In this talk I will highlight the most surprising findings during this adventure. These include new insights about the “true smile of happiness,” discovering new ways cameras (and your smartphone, even in your handbag) can compute your bio-signals without using any new sensors, finding electrical signals on the wrist that reveal insight into deep brain activity, and learning surprising implications of wearable sensing for autism, anxiety, sleep, memory, epilepsy, and more. What is the grand challenge we aim to solve next?

    Summary
    DAC is still the premier event for everyone in the EDA, semiconductor IP, SoC and foundry business, so I hope to see you in Austin this summer enjoying the keynotes, technical papers, exhibits and many networking events.


    Recipes for Low Power Verification

    by Bernard Murphy on 03-20-2017 at 7:00 am

    Synopsys hosted a tutorial on verification for low power design at DVCon this year, including speakers from Samsung, Broadcom, Intel and Synopsys. Verification for low power is a complex and many-faceted topic, so this was a very useful update. There is an abundance of information in the slides which I can’t hope to summarize in a short blog, so I’ll just highlight a few points that stood out for me. I suggest you get your hands on the slides (Tutorial 6 from the DVCon 2017 set) for more detailed study.

    The tutorial kicked off with a review (by Amol Herlekar of Synopsys) of trends in low-power design based on global SNUG 2016 surveys. One surprising observation is just how many advanced power-saving techniques are being used across a wide variety of applications, from cloud computing to PCs, digital home, mobile, auto, IoT, medical, mil-aero, industrial, and test and measurement. I remember when, not so long ago, many designers thought the world of power management was bounded by clock gating, but now the survey shows widespread adoption of power gating, sequential clock gating, DVFS (wow!), state retention and use of multiple power and voltage domains. The bulk of respondents were using 10 or fewer voltage domains and 10 or fewer power domains, but that’s still a lot.

    Users continue to migrate to UPF (>70% within a year) and especially UPF 2.0, while other formats continue to lose mindshare. And while UPF has made an impossible task possible, there are still many questions around best methodologies – how to deal with analog and other hard IP, how best to organize LP verification, how to get good coverage of low-power states and transitions, and how best to verify a PG netlist (remember all those switches for power and voltage gating?). Follow-on tutorials provided advice for verification engineers in the trenches on how they approach these problems.

    Amol also presented the Synopsys top-level recommendation for a power verification flow – always get static verification clean, then proceed to dynamic verification and do this at RTL, post-synthesis and post-layout. Other speakers largely echoed these points (with a lot more elaboration on details), starting with Vikas Gupta from Samsung, who provided guidelines specifically on static verification. He talked about hierarchical UPF verification and the care required in getting this right, such as managing multiple instances of a block potentially appearing in different power configurations. He also stressed the importance of fully validating the UPF at each stage (RTL, post-synthesis, post implementation). And he stressed that in their environment, waivers are not allowed; you must get the UPF clean the right way, not by fudging.

    One of my take-aways from this section was that static verification is manageable if (and possibly only if) you follow a disciplined approach to constructing and checking the UPF at each stage. My other take-away was that what we have today effectively solves half of the power intent problem; from UPF we can verify power intent, but the assembly of that intent is still (for most) largely manual. Users need tools/templates that help build UPF following best-practice guidelines, moving assembly closer to correct-by-construction.

    Broadcom followed with a discussion on verification challenges. This presentation was from YC Wong; if anyone can stress-test an EDA concept or tool, YC and his team will get there first, so I'm unsurprised that he got this piece of the tutorial. Incidentally, he called out VC LP, VCS-NLP and Verdi use in their flow. For YC, it's ultimately all about PG netlist validation. That's not the way many of us think about it, but you can't fault the reasoning. The PG netlist contains all the power and voltage switches and other power connections which are only implied (through UPF) in earlier gate/RTL representations. So his team starts by building a PG netlist (mapped, no optimization) even before they hand off to synthesis, and they run static verification on that netlist. And of course, they repeat this on the LVS netlist before checking LP strategies.

    He also re-emphasized that you should do everything (at each stage) to maximize static verification before you get into dynamic verification. Static in this context isn’t just UPF linting versus RTL/gate netlists. It also includes formal and X-prop analysis. Especially when you get to PG simulation, it is way too expensive to be finding problems that could have been found statically. He particularly stressed the value of X-prop analysis in finding potential sequencing issues before you get into dynamic verification.

    Satya Ayyagari from Intel closed with a discussion on low power simulation (where they use VCS-NLP and X-prop). Satya gave a very detailed description of strategies to approach different kinds of IP and to approach full-chip LP simulation, both on a simulator and in emulation (and even prototyping). Attention to gate level simulation was interesting, for mission mode, for power sequencing and for scan testing where scan paths cross power domains. Emulation was stressed as important for full-chip power verification across use cases, but he mentioned a point I didn't realize – that he has seen no good method to handle power state tables in emulation. Satya suggested as a closing point that LP modeling would benefit from some level of understanding of voltage in logic verification; not the AMS kind of understanding, which would be too slow, but enough to trap potential mismatches in level shifter and voltage switching expectations.
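    For readers less familiar with the constructs behind these points, here is a sketch of the level-shifter and power state table (PST) pieces of a UPF, with purely illustrative names, voltages and domains. The PST is exactly the structure Satya noted has no good handling in emulation, and the level-shifter rule carries the voltage expectations that LP modeling would ideally be able to check.

```tcl
# Hypothetical example: shift PD_CPU outputs up to the higher top-level voltage
set_level_shifter ls_cpu -domain PD_CPU \
    -applies_to outputs -location self -rule low_to_high

# Power state table: the legal supply combinations a simulator
# (or emulator) is expected to honor
add_port_state VDD     -state {ON 1.0}
add_port_state VDD_CPU -state {ON 0.8} -state {OFF off}
create_pst pst_top -supplies {VDD VDD_CPU}
add_pst_state run   -pst pst_top -state {ON ON}
add_pst_state sleep -pst pst_top -state {ON OFF}
```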

    Srinivasan Venkataramanan in the audience asked a bunch of questions. I learned afterwards that he works at a verification consulting group in Bangalore. He liked the topic of complex power switches raised by YC (which will require further extensions to UPF). He also liked the detail ("in the trenches", he called it), especially from the Intel speaker. And he liked that Synopsys acknowledged the need for extended assertion capabilities beyond standard SVA. Overall he said he found high value in this tutorial, a notable endorsement from an independent member of the audience.

    You can find the slides HERE. To get these you will need to have registered as a participant and you will need your badge ID from the event. Or you can just talk to a colleague who already downloaded the slides 😎.

    More articles by Bernard…


    Tesla’s Cat in the Bag

    Tesla’s Cat in the Bag
    by Roger C. Lanctot on 03-18-2017 at 8:00 pm

    Some day soon, maybe this year or next, Tesla Motors is going to let the cat out of the bag that its cars are not only connected but are also subject to remote control. Remote control isn’t the sort of feature that consumers look for in their personal transportation, so it isn’t likely to be something Tesla is going to bring up. It also has a range of security, privacy and liability implications that make it a sticky topic to discuss.


    Ethical and not-so-ethical hackers have already demonstrated unauthorized remote control of cars including Tesla's, FCA's Jeep Cherokees and OnStar-equipped vehicles from General Motors (on "60 Minutes"). In fact, OnStar offers "remote vehicle slowdown" as an anti-theft feature, which works in cooperation with law enforcement agencies.

    The topic of remote control is increasingly arising in connected car and automated driving conversations as companies, such as Local Motors, introduce driverless shuttle systems in Berlin and elsewhere, which come with remote monitoring. In fact, consumers are increasingly being offered smartphone applications that enable the equivalent of short-range vehicle remote control for parking cars in tight spots. Tesla already offers this. BYD in China has been showing off remote control of a car via smartwatch for several years.

    At Mobile World Congress 2017 in Barcelona Ericsson demonstrated the use of 5G network technology for remote control of a car driving on a distant racetrack. The car was not moving quickly, but the low latency communication enabled by 5G connectivity was used to demonstrate remote control as a feature.

    Operators of fleets of driverless vehicles realized early on that these fleets would require remote control. The integration of cameras on cars has enabled all-around-view technology, which, combined with high speed LTE wireless networks, further assists the development of remote control as an application.

    The concept came up during a panel discussion at CityCarSummit in Berlin yesterday, with an executive from Local Motors, which has become a fleet management company managing its Olli driverless shuttles, questioning whether car companies are capable of managing fleets. A Daimler executive sharing the stage at the time smiled wryly at the comment behind the back of the Local Motors exec. (Daimler manages both its Car2Go car sharing fleet and operates Fleetboard for commercial trucks. Daimler is also in the business of making and operating buses.)

    Car companies have long known that once their cars are connected, they will some day bear the liability and obligation to take control of one of those cars remotely, if they can, should that vehicle become involved in criminal or life-threatening activity. Let's call it a moral obligation, because it's never happened.

    Law enforcement officials already take advantage of embedded connections to track vehicles, as occurred in the case of the Boston Marathon bombers who stole a telematics-equipped Mercedes. But remotely stopping a connected car in the midst of committing a terrorist act, for example, is a circumstance that has yet to arise.

    Tesla is courting car insurers that are increasingly inclined to offer discounts to reward Tesla owners for driving their vehicles in autopilot mode – a behavior which Tesla CEO Elon Musk has claimed results in fewer crashes and insurance claims based on vehicle data collected by the company. But the prospect of remote control introduces an entirely different connected car value proposition for insurers.

    Car makers building in vehicle connections are currently wrestling with cyber-security issues, meaning they are simultaneously introducing an attack surface and trying to prevent attacks. But preventing intrusions and enabling remote control are not mutually exclusive.

    Once a car maker is connected to its cars it has become a fleet operator and thereby bears some responsibility for the knowledge of how its vehicles are being operated. It is the automated driving proposition that introduces the need for more active remote monitoring and control.

    The legal issues can be sticky and vary from country to country. You can get a flavor of the debate from this report regarding OnStar:

    https://www.techdirt.com/articles/20170116/09333936490/law-enforcement-has-been-using-onstar-siriusxm-to-eavesdrop-track-car-locations-more-than-15-years.shtml

    No car maker has gratuitously taken control of its cars remotely. OnStar’s remote slowdown feature is already considered to be fairly mundane – even though it is the only such offering on the market in the world. Though mundane for OnStar today, remote control is controversial as a concept.

    If hackers can take remote control of a car, then car makers will be expected to have the same capability – particularly in the case of self-driving cars. In fact, car makers currently deploying intrusion detection software such as Harman’s Towersec, Argus Cyber Security, NNG’s Arilou or even QNX’s Certicom will need to develop the means for remotely restoring control and responding to those intrusions. The automotive industry has yet to sort these issues out. As of today, if a car maker detects a cyber attack on a vehicle, the customer may not even be notified.

    It is clear that car companies installing connections know more and more about how their vehicles are being operated and how they can be compromised. What is missing are the procedures and protocols for responding to the information that is being gleaned regarding driving behavior, vehicle performance and intrusion detection.

    The fleets of driverless vehicles envisioned by “futurists” and debated at events such as CityCarSummit are proliferating – which means that remote control of vehicles is about to become a growth industry. In essence, every car maker – whether that auto maker likes it or not – is in the process of becoming a fleet operator with remote control over both driven and driverless vehicles.

    Sooner or later Tesla will find a way to convey this “news” to its owners as an attractive new feature rather than a creepy technology over-reach. My favorite application will be the insurance company notification of an impending hailstorm which will send owners racing to remotely pilot their cars to covered parking.

    Law enforcement certainly welcomes the prospect of remote control as a crime fighting tool – along with the potential to subpoena access to microphones built into cars for hands-free phone systems to listen in on suspects’ conversations. The only question that remains is whether consumers will regard remote control as a benign enhancement or as an invasion of the vehicle snatchers. Make no mistake, remote control is coming to a connected car in your driveway or garage sometime soon. In fact, it’s probably already there. It’s 10 p.m. Do you know where your car is?