
Artificial Intelligence for Cyber Security

by Alex G. Lee on 10-17-2016 at 12:00 pm

Since the United States is such a large market for products and services from nearly all technology innovations, counting patents to measure the growth of patenting in the US over a period of time can be a good tool for monitoring the evolution of technology innovations. The following figure shows the growth trend for artificial intelligence (AI) for cyber security technology innovations, based on patent research covering nearly 100 applications published and patents issued by the USPTO as of 3Q 2016.

The figure indicates that very active AI for cyber security technology innovation started in 2013 and keeps increasing (since there is usually a two- to three-year lag between the priority date and the publication date, the number of patents in 2015 is not fully accounted for as of 3Q 2016).
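The trend-counting approach described above can be sketched in a few lines; the records and field names here are hypothetical, not an actual USPTO export format:

```python
from collections import Counter

# Hypothetical records from a patent search export: each entry carries a
# priority year and whether it is a grant or a published application.
patents = [
    {"id": "US20160253495", "priority_year": 2015, "kind": "application"},
    {"id": "US9000001",     "priority_year": 2013, "kind": "grant"},
    {"id": "US20140123456", "priority_year": 2013, "kind": "application"},
]

# Count filings per priority year to build the growth trend.
trend = Counter(p["priority_year"] for p in patents)
for year in sorted(trend):
    # Recent years are undercounted: applications typically publish
    # two to three years after their priority date.
    print(year, trend[year])
```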


Dividing the number of patents held by an assignee by the total number of patents gives the percentage that the assignee contributes to the AI for cyber security technology innovations. Ranking assignees by patent count is thus an important part of visualizing the innovation landscape. The following figure shows the top assignees for the AI for cyber security technology innovations. The top AI for cyber security technology innovators include Cisco, Microsoft, IBM, Wistron, HP, Cyberpoint International LLC, and Cyberricade.
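The percentage calculation is simple division; a quick sketch with illustrative counts (the numbers below are invented, not the study’s actual figures):

```python
# Share of innovation per assignee = assignee patent count / total count.
# Counts below are illustrative, not the actual figures from the study.
counts = {"Cisco": 12, "Microsoft": 10, "IBM": 9, "Others": 69}
total = sum(counts.values())

# Rank assignees by patent count, largest first, and print their share.
for assignee, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{assignee}: {100 * n / total:.1f}%")
```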

The following figure shows the key applications of the AI for cyber security technology innovations. The figure shows that malware detection is the most common application of AI for cyber security.

As an interesting innovation in AI for cyber security, US20160253495 illustrates a system that exploits machine learning techniques to detect anomalous behavior in cyber-physical systems, which are susceptible to both physical and cyber attacks. A cyber-physical system is a system that supports efficient control and decision making by interlocking the real world and the virtual world. It is a hybrid system in which embedded systems are combined over a network, and it has the characteristics of both continuous elements, such as physical components, and discrete elements, such as software.

Examples of cyber-physical system applications include the smart electric grid, smart transportation, smart buildings, smart medical technologies, smart air traffic management, and smart manufacturing. Cyber-physical systems are the key enabling technologies of the Industrial Internet of Things (IIoT). The IIoT will produce huge opportunities for companies in Aviation, Oil and Gas, Transportation, Power Generation and Distribution, Manufacturing, Healthcare and Mining industries.


CEO Interview: Charlie Janac of Arteris

by Daniel Nenni on 10-17-2016 at 7:00 am

Charlie Janac ArterisIP

When Charlie Janac talks, people listen, absolutely. Charlie’s 30-year career spans EDA, IP, semiconductor equipment, nanotechnology, and venture capital. For the last 11 years he has been CEO of interconnect IP provider Arteris, which invented the industry’s first commercial network-on-chip (NoC) SoC interconnect IP solutions and continues to lead the industry. In 2013 Charlie engineered a landmark acquisition deal with Qualcomm while retaining the right to license, support, and maintain the existing Arteris product lines.

What are some of the latest developments at Arteris?
I am happy to report that in 2016 Arteris will be at almost the same revenue and engineering staffing levels as we were in 2012, the year prior to our technology asset transaction with Qualcomm. We have shipped a major new product every year for the past three years: interconnect resilience for mission-critical SoCs in 2014, automated interconnect timing closure in 2015, and our Ncore cache coherent interconnect in 2016. We are confident these technology deliverables have come at a more rapid rate than our competition’s. Furthermore, these new interconnect products have been designed into some of the leading SoCs for mobility, automotive, networking, SSD, and consumer applications. At the same time, our FlexNoC non-coherent interconnect continues as the market share leader in its category of interconnect IP, with two new product releases this year alone. The best indication of new product technology features and quality is customer adoption. We have added nearly 30 new customers since 2013 and received major reorders from our existing licensees. Arteris continues to deliver quality interconnect IP products out of engineering, and it continues to get them broadly adopted by semiconductor and system OEM companies.

What is the importance of interconnect IPs? Is that importance growing?
SoCs are now assembled out of internal and commercial IP blocks and one of the important differentiators is how that IP is assembled. While the majority of the IP blocks are proven, they have relatively fixed functionality – it is the interconnect IP that changes many times during the course of a project and almost always changes between projects. Therefore, it is mission critical for SoC implementation companies to have the most efficient, highest performance, and cost effective interconnect IP in their designs. The 10/7nm generation of designs will also lead to yet another level of increasing interconnect complexity and so will the move to 2.5 and 3D silicon.

The importance of SoC interconnect technology is growing with each generation of new system-on-chip devices. The network-on-chip (NoC) type interconnect that we pioneered was widely adopted in the 40nm generations of SoCs between 2008 and 2009. The 16/14nm FinFET-type SoCs of today represent another inflection point in terms of architectural complexity. IoT chips require a step-function increase in battery life, which in turn is driving power management complexity. This is yet another important driver for our interconnect IP.

Do you see a shift in IP market segments?
We are absolutely seeing shifts in IP market segments. Back in 2013 we had 20 out of 23 mobility SoC makers license our FlexNoC interconnect IPs. Today it is seven out of 10. The mobility market did not shrink but it became much more consolidated and it will consolidate still further. The growth of this market is still attractive but it has slowed significantly.

Conversely, Arteris had one automotive SoC licensee in 2012 while today it is nine. The automotive SoC market is being driven by two developments: (1) much greater use of electronics, driven by self-driving and over-the-air updating capabilities, and (2) the replacement of multiple microcontrollers by fewer and larger SoC-type semiconductors. The car is becoming one of the most useful IoT devices. Mobility is still the larger SoC market but automotive is now and will continue to be the fastest growing.

Another set of IP market shifts is related to geography. While semiconductor growth has slowed in many parts of the world, growth in China is accelerating. This is partially driven by government investment and partially driven by increasing domestic consumption of electronic devices. For every company that has merged with another in the semiconductor world, there is another reasonably sized company starting in China. As a result, on the global scale, the number of SoC projects should remain relatively stable while volumes continue to increase.

Is the ARM acquisition by Softbank good or bad for other IP companies?
It may be too early to tell what impact Softbank’s ARM acquisition will have, but I think it will be relatively neutral compared to an acquisition of ARM by either a semiconductor or systems company. That is a good development. However, Softbank paid $32 billion for a company that has $1.6 billion in revenue, so it’s safe to assume some things will change as a result. We will have to watch how things evolve. My hope is that the ARM architecture continues to be an open ecosystem as it has been in the past, because that has fueled innovation and growth.

What is the significance of the self-driving car for companies like Arteris?
Arteris is heavily focused on needs of our automotive customers for both functional safety features and ISO26262 compliance and documentation. In my opinion, the automation technology used in assisted driving vehicles will ultimately have even greater impact on society than the smart phone. The impact will not only be seen in cars but also trucks, drones, public transportation vehicles and vehicle types we cannot even fathom yet. The self-driving car is an immensely complex system and so the industry has to evolve over time. The automated highway driving scenario is reasonably close to being solved but the city driving scenario represents another level of complexity. We will see increased investments in the electronics that will underpin the infrastructure to support this evolution in ways that we cannot completely predict today. Ultimately, there may be some segregation of automated and human-controlled vehicle traffic.

Automation-assisted vehicles are mission-critical systems that must work as well as we can possibly make them. Therefore, companies such as Arteris have to deliver functionally safe IP that supports the safety objectives of the transportation industry. One of our technology directions is to deliver resilient interconnect IP that is tolerant of errors due to environmental radiation and manufacturing flaws. This technology has to be backed up by documentation and analysis for standards compliance, and working silicon must be proven in the field. All of this requires a large investment by the semiconductor IP company, so you have to see significant customer adoption to be able to profitably sustain these kinds of developments.

There are many opportunities for innovation in this segment. Today, we are delivering a resilient, fault-reporting interconnect, but ultimately, we need to get to a truly fault-tolerant interconnect.

What are some of the technology challenges posed by FinFET technologies?
One of the issues that has become increasingly troublesome for customers is timing closure. It has become so complex that it is putting the delivery schedules of major SoCs at risk. When you cannot go across the entire chip from an initiator IP to a target IP in one clock cycle, you have to insert repeaters or pipelines. Each pipeline can be made up of several choices of register configurations, and a complex SoC can have 6,000 factorial (a number on the order of 10^20065) pipeline choices or more. This level of complexity exceeds the human level of optimization, so the process must become more automated. The benefits of automating timing closure include saving months of effort that can impact SoC delivery schedules, and a timing closure scheme that is not over-engineered in terms of area, power, and latency, resulting in lower R&D and SoC unit costs. Automated timing closure is enabled by NoC-type interconnects because they separate the interconnect IP from the other IPs in the design, allowing a separate interconnect timing closure process. From everything we see, the days of manually closing timing for complex SoCs are quickly coming to an end.
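The scale of that search space is easy to check with Python’s log-gamma function, which gives the logarithm of a factorial without materializing the full integer:

```python
import math

# log10(6000!) via the log-gamma function: lgamma(n + 1) == ln(n!).
log10_choices = math.lgamma(6001) / math.log(10)
print(f"6000! is on the order of 10^{log10_choices:.0f}")
```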

Also Read:

CEO Interview: Marie Semeria of LETI

CEO Interview: Geoff Tate of Flex Logix

CEO Interview: Xerxes Wania of Sidense


Vidyo Aims To Disrupt Video Banking After Seeing Success In Healthcare And Defense

by Patrick Moorhead on 10-16-2016 at 4:00 pm

Commercial video services are a funny thing: they seem to go through ebbs and flows of industry excitement. One day it seems boring, and the next thing you see is live video from a drone, Google releasing Duo, a patient traveling by dogsled to receive care from a doctor 1,000 miles away… and then everyone gets excited again. I worked for AT&T in the early ’90s, so I’ve seen my share of video excitement curves.


Image credit: Shutterstock

I’ve been tracking and writing about Vidyo for years now; they’re a video IP and platform company with a unique strategy. They were the first, in my book, to deliver the highest-quality, scalable video across low bandwidth using industry-standard servers and networking, and to compete with Cisco Systems, Microsoft, Polycom, and Huawei Technologies. Vidyo keeps finding ways across various industries to disrupt the big-name competitors with the VidyoWorks platform, in telemedicine and healthcare as well as defense. It looks like Vidyo’s next big disruption will be targeted at the video banking market. Let me provide some context first.

Video gets hot again driven by PaaS
Although the Video PaaS market currently sits at only around $44M, IDC predicts it will explode to over $1.7B by 2020; in other words, video is suddenly hot again. We do custom forecasts and don’t always agree with IDC’s (we don’t measure the Video PaaS market ourselves), so I can’t vouch for that number, but I do believe the growth will be massive. It will come from enabling developers to quickly embed HD-quality real-time video into any app and deploy it at web scale, moving video out of the conference room and into apps, business processes, workflows, and IoT devices (such as NCR ATMs, kiosks, and robots).

First healthcare and defense, now banking

Vidyo customers are already using the company’s APIs and SDK to enhance the customer experience across a variety of applications like telehealth, field services, and defense, just to name a few. And while legacy vendors have been saying for years that they will embed video directly into Electronic Health Record (EHR) systems, Vidyo customers enjoy it today, without the need to upgrade their network. Now I really want to focus on what’s going on in the realm of video banking.

Like many other industries, banks have gone through a dramatic digital transformation in the last decade. Their brick-and-mortar locations are not totally obsolete, but now online banking is undeniably king. This shift has done a lot to improve customer experience through convenience, but the flipside is that many banks lost the customer loyalty that comes with face-to-face banking at their physical locations. One answer to that, of course, is face-to-face video.

All the cool banks are doing it and the U.S. is behind

International banks of all sizes in Europe, Asia, and Latin America have already jumped on the video bandwagon, from banks serving high-net-worth clients like Barclays to those specializing in everyday, general banking like IndusInd Bank. According to Vidyo, their customers currently include 6 of the 25 largest banks in the world, as well as 5 banks that are the largest in their home countries.

U.S. banks are running behind their international counterparts in adopting face-to-face video banking, even though banks that have deployed it report doubling customer loyalty or Net Promoter Score (NPS), and customers have requested video banking and video-enabled customer engagement as their primary form of interaction with the bank.

Through Vidyo’s new PaaS solution, the risk and effort to video-enable apps has never been this low and the scalability has never been this high. Through a very well-documented and simple API and cloud service, banking customers can add their own video capabilities. Also consider that enabling this doesn’t take some huge network overhaul because of the video scalability. Vidyo was founded on scalable video technology so good, even Google partnered with them.

The risk of not video-enabling the bank
Some banks won’t ever take the plunge into video-enabling their services: customer service, mortgage, real estate, small business services, and services for high-net-worth individuals. The risk of not doing so seems pretty clear to me, and very similar to that faced by any company that doesn’t align with the way its progressive and future customers operate: losing market share. What happened to Blockbuster when it didn’t enable video streaming? What happened to the CD and record distributors? You get the idea.

If you’re thinking, “people don’t want to do video all the time”, you’re probably missing the point. Consumers want to self-select their method of interaction based upon their desired level of engagement. Some banking consumers want a web page, some prefer chat, others prefer brick and mortar, but many want the convenience and depth of high-quality video.

Wrapping up
I agree with Vidyo CEO Eran Westman when he said in a recent press release that “video collaboration makes a meaningful difference in a company’s digital transformation.” This couldn’t be more true when it comes to modern banking. Face-to-face video has the potential to improve customer experience and loyalty, while simultaneously decreasing time-to-delivery—it’s simply good for the bottom line.

I believe Vidyo’s unique strategy and offerings (including their original VidyoWorks platform, as well as Vidyo.io) have earned them a place at the table alongside their bigger-name competitors in the market like Cisco Systems and Microsoft, and I think their current, expanding banking customer base is testament to that. If the demand for Video PaaS grows as projected over the next several years, things are looking pretty good for Vidyo.


The Secret Plans of Mark Rosekind & Donald Trump

by Roger C. Lanctot on 10-16-2016 at 12:00 pm

Donald Trump says he has a secret plan for defeating ISIS in Syria, but says it would be self-defeating to share that plan with the American electorate and, presumably, ISIS itself. Administrator Mark Rosekind says the National Highway Traffic Safety Administration has a plan for reducing highway fatalities to zero in the U.S., but that the solution won’t arrive until 2046.

I like Mark Rosekind. Mark Rosekind brought a breath of fresh air to a regulatory agency struggling to shrug off a reputation of suffering from “capture” by the industry it was created to oversee. Rosekind was given two years to make his mark. Mark Rosekind is starting to sound like an evasive politician.

This is unusual behavior for Mark, who has been one of the most direct Administrators NHTSA has ever seen. His insistence on 100% completion of vehicle recalls stands out among many initiatives taken on by NHTSA and the industry (with NHTSA goading) on his two-year watch.

But Rosekind’s reckoning has arrived in steadily rising highway fatalities. He may have two solutions. But they’re not in the current script.

Following a year, 2015, when the annual total of fatalities spiked 7%, interrupting nearly a decade of declining fatalities, the agency is now facing a 10% spike in highway fatalities in the first half of 2016. The agency has taken action on self-driving car guidelines, is expected to release “phase two” driver distraction guidelines, and may even gain President Obama’s blessing for vehicle-to-vehicle communications rule making. But none of these initiatives will stop the current bleeding on U.S. roadways.

The 10% increase in fatalities in the first half of 2016 comes against an increase in miles traveled of only 3%, suggesting that something sinister and deadly is afoot. It’s easy to see a range of sources contributing to the carnage including:

  • A shift in vehicle sales to SUVs, pickup trucks, crossovers and generally larger cars as sales of more modestly sized sedans shrink while fuel prices plunge.
  • The onset of ever more powerful engines and increasing speed limits.
  • The onset of vehicles with powerful but noiseless electrified powertrains.
  • The proliferation of smartphone connectivity systems contributing to in-cabin confusion and distraction including the misuse or disuse of these systems.
  • Variable and often confusing infotainment system interfaces overall.
  • Older cars – as consumers hang onto cars longer as new car prices rise.

In sum, drivers are being given larger and more powerful vehicles with increasingly distracting user interfaces and mobile devices with which to cope. There isn’t a lot that regulators can do in a short period of time, especially when most of the options on the table at the close of the Obama administration will have little or no impact in the short-term.

Two measures not in the current NHTSA playbook would have an immediate impact:

1. Lower the legal blood alcohol content level to 0.02 (or zero!)
2. Prohibit the TOUCHING of a mobile device while driving

NHTSA is quietly working on a program called DADSS, the Driver Alcohol Detection System for Safety. The program calls for alcohol detection systems capable of disabling cars to be built in as optional or standard equipment. Bringing this technology to market will take years.

Lowering the legal blood alcohol content level to 0.02 from the current 0.08 in the U.S. will have the immediate impact of bringing zero tolerance to the question of drinking and driving. One third of all annual highway fatalities in the U.S. are attributed to alcohol. The current 0.08 allowable blood alcohol content level is an invitation – a temptation – for disaster. It is time to dial it down. The current fatality rate of 100 deaths/day in the U.S. is an embarrassment, a national tragedy, a shame, and a call to action.

Similarly, the current U.S. policy of tolerating and fostering 50 different state-level approaches to prohibiting texting or talking while driving has only served to confuse drivers further. These laws have also proven difficult to enforce – even as law enforcement officers have gotten more clever about checking smartphone usage by drivers at the scenes of crashes.

Better to take the European approach here and bar the touching of the phone or other mobile devices altogether while driving. This is easier for all drivers to understand and for law enforcement to apply.

With 100 people dying every day and thousands injured, we can’t wait until 2046. We need to save lives today. And that’s no secret.


5 Best Practices For Developing Secure IoT Solutions

by Padraig Scully on 10-16-2016 at 7:00 am

Security is often an afterthought when developing IoT solutions. Security features are commonly cut from initial designs to accommodate additional device functionality. However, security needs to play a central role in IoT projects if we are to secure the Internet of Things.

The process of developing secure IoT solutions was recently analyzed in an industry white paper we published under the title “Guide to IoT solution development”. In the paper, we discuss the IoT solution development process across five major phases:

1. Business case
2. Build vs. buy decision
3. Proof of concept
4. Piloting
5. Commercial deployment

According to the paper, discussions with IoT experts revealed the following five best practices for developing secure IoT solutions:

1. Use a security threat model to assess the attack surface
Security can’t be handled by the device or cloud alone. Rather, both must work together, along with every component of the solution, to reduce the overall attack surface and keep weak links to a minimum. It is important to realize that one weak link can open up your whole system (e.g., hackers have gained access to entire company networks simply by entering the default device password of an IoT-connected surveillance camera).

Combining hardware and software solutions (i.e., cyber-physical) that go from device to cloud and cover everything in between will enable more seamless security in IoT. OEMs, ODMs, and device manufacturers need to understand that threats can come from a number of different areas and may be unknown initially; the STRIDE model outlines six possible threats to IoT.

S.T.R.I.D.E. MODEL

• SPOOFING IDENTITY: e.g., an attacker uses another user’s or device’s credentials to access the system.
• TAMPERING: e.g., an attacker replaces software running on the device with malware.
• REPUDIATION: e.g., an attacker alters the authoring information of malicious actions so that wrong data is written to log files.
• INFORMATION DISCLOSURE: e.g., an attacker exposes sensitive information to unauthorized parties.
• DENIAL OF SERVICE: e.g., an attacker floods the device with unsolicited traffic, rendering it inoperable.
• ELEVATION OF PRIVILEGE: e.g., an attacker forces the device to perform more actions than it is privileged to do.
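A lightweight way to apply the model is to walk each component of a solution through the six categories. A minimal sketch, with invented component names and mitigations:

```python
# Map each STRIDE threat to an example mitigation (mitigations are
# illustrative, not an exhaustive or authoritative list).
STRIDE = {
    "Spoofing":               "authenticate users and devices",
    "Tampering":              "verify firmware signatures",
    "Repudiation":            "append-only, signed audit logs",
    "Information disclosure": "encrypt data at rest and in transit",
    "Denial of service":      "rate-limit and filter traffic",
    "Elevation of privilege": "enforce least-privilege permissions",
}

def review(component: str) -> list:
    """Produce one review question per STRIDE threat for a component."""
    return [f"{component}: how do we counter {threat.lower()}? "
            f"(e.g., {mitigation})"
            for threat, mitigation in STRIDE.items()]

# Walk a hypothetical component through all six categories.
for question in review("surveillance camera"):
    print(question)
```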

2. Implement security by design
Security-by-design is a fresh approach in which security experts, architects, and engineers from each layer get involved in the full architecture design of an IoT solution right from the outset and create a security development lifecycle (SDL). As outlined in our 2016 IoT platforms market report, thinking about security across the product lifecycle helps IoT developers build more secure software and address important security compliance requirements. Another innovation related to security-by-design is involving an “attacker” who performs penetration testing during product development to assess the system and look for vulnerabilities.

3. Force yourself to think about security from end to end
IoT demands end-to-end security solutions that traverse the layers. A senior product manager at a leading IoT cloud platform says, “IoT security must be consistent across the device OS, network, cloud and application.” Unfortunately, not all IoT systems are thought out from end to end. For example, in many cases identity verification is only available at the device level. However, if a hacker jailbroke the device, he or she could remove the software restrictions imposed by the OS, gain root access to the file system, and install untrusted applications on the device. In the case of such a hardware compromise, the other layers should also confirm authentication of device and user identity; e.g., the cloud should know which device is compromised and restrict its access to the network.
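The layered-check idea can be made concrete with a small sketch: the cloud keeps its own view of device health and refuses compromised devices even if they pass on-device checks. All identifiers below are illustrative:

```python
# Cloud-side gatekeeper: device identity is re-checked at this layer, so a
# jailbroken device that defeats on-device checks is still locked out.
REGISTERED = {"cam-001", "cam-002", "lock-007"}
COMPROMISED = {"cam-002"}  # flagged, e.g., after a failed attestation

def authorize(device_id: str) -> bool:
    """Allow network access only to registered, non-compromised devices."""
    return device_id in REGISTERED and device_id not in COMPROMISED

assert authorize("cam-001")
assert not authorize("cam-002")    # known-compromised: rejected
assert not authorize("rogue-999")  # never registered: rejected
```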

4. Do not minimize security features to get the MVP out quickly
Companies developing IoT solutions often want to get to market quickly and overlook the importance of building crucial security features into their minimum viable product (MVP) and beyond. In many cases, it is up to the solution providers to make the customer aware of threats and push for security. However, with 360+ competing providers in the market today, competition is fierce, and the temptation to rush to market without the highest security level is unfortunately a reality for companies.

5. Design the system using proven industry best practices
The white paper outlines some industry best practices of engineers building secure IoT solutions, including:

• Employing hardware-based security such as TPM 2.0 to offer an additional root of trust.
• Using unique identity keys associated with the device (flashed into the hardware trust module or using manufacturer IDs, e.g., Intel EPID).
• Shielding devices behind a gateway or firewall.
• Enabling user-selected device IDs verified across the stack, e.g., on the OS, edge gateway, and cloud.
• Employing secure boot processes for malware resistance (e.g., only running securely signed images).
• Using a cross-stack, standards-based security approach, making it easy to adopt, easy to adapt (with the standard), and easy to justify to stakeholders.
• Auditing and monitoring events and potential breaches in real time, employing security analytics.

It is worth noting that if the hardware is designed with a vulnerability, the end-to-end solution may still be compromised. Thus, it is important to look not only at software security but also at hardware aspects, e.g., root-of-trust chip security, board-level protection, and anti-tamper measures.

For more details on developing secure IoT solutions, as well as other best practices for OEMs, ODMs, and device manufacturers, check out our “Guide to IoT solution development” white paper, which is available for download free of charge.
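As an illustration of the secure boot bullet above, a boot loader can refuse any image whose digest does not match a trusted value. This sketch uses a plain hash comparison where a real system would verify an asymmetric signature:

```python
import hashlib

# Digest of the one image we trust (in a real system this would be covered
# by a vendor signature, not a hard-coded hash).
TRUSTED_DIGEST = hashlib.sha256(b"firmware-v1.2").hexdigest()

def secure_boot(image: bytes) -> bool:
    """Boot only if the image hashes to the trusted digest."""
    return hashlib.sha256(image).hexdigest() == TRUSTED_DIGEST

assert secure_boot(b"firmware-v1.2")          # genuine image boots
assert not secure_boot(b"firmware-v1.2-mal")  # tampered image is refused
```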

Adding DSP hardware shrinks energy for MCU core

by Don Dingee on 10-14-2016 at 4:00 pm

ARM’s Cortex-M4 processor core represented quite a breakthrough in digital signal controller technology when launched in 2010. Adding a single-cycle multiplier and SIMD instructions enabled basic DSP algorithms while retaining the low power benefits of an MCU. New technology circa 2016 – embedded programmable logic – can extend the Cortex-M4 or other core for the same DSP operations using significantly less energy.

Flex Logix has published a new case study in presentation format exploring performance and power consumption of a stock ARM Cortex-M4 in TSMC 40G versus the same algorithms offloaded into EFLX embedded programmable logic tiles. For the comparison, EFLX figures are from TSMC 40ULP (with comparable dynamic power), and leakage is nullified with power gating. The study also takes out memory access overhead for the Cortex-M4, assuming instructions and data are cached.

Similar to many MCU applications, the crux of this argument is reducing energy per unit of algorithmic work. Shortening the bursts of active computation and allowing functional blocks to be power gated more often results in an overall energy savings and longer device battery life. Rather than using a complete DSP core and C programming, the EFLX configuration can be tuned in RTL for the exact algorithm at hand. (Several posts have introduced the EFLX technology – navigate to FPGA > Flex Logix to see the previous discussions.)


Conceptually, this is a similar idea to using a full-sized external FPGA for algorithm offload, but with major differences in power consumption. EFLX is an embedded FPGA, in the same process node as the MCU core alongside it. There are no high-speed transceivers, which are one of the big power hogs in an FPGA. EFLX reconfigurable building blocks (RBBs) and tiles have been optimized for fine grain clock gating, and the interconnect fabric is optimized with power gating – reducing leakage power some 36x.

As we suggested in another post on IoT processing a few days ago, a fast multiplier is great for many applications, but it is insufficient for many others. To illustrate the differences, Flex Logix chose to study a 5 tap FIR filter and a single-stage BIQUAD filter, DSP algorithms that involve both multiplies and data accesses. The computations certainly can be performed on a Cortex-M4 alone – for the 5 tap FIR, 8080 clock cycles are required for 256 samples.

The DSP version of the EFLX-100 tile provides 2 MACs and 88 LUTs. Tiles can be arrayed in up to a 5×5 configuration to get more multipliers and LUTs. For a 32-bit data, 16-bit coefficient version of the 5 tap FIR, 5 EFLX DSP tiles are required to get the required multipliers, and no additional logic is required, with LUTs to spare. The 16-bit BIQUAD implementation needs only 3 EFLX-100 tiles. Both versions can be optimized at the RTL level for more efficient multiply sequencing.


Keep in mind that RTL is synthesized using the Synopsys Synplify Pro engine, not some proprietary piece of magic. Gate level simulation for this study was performed in Mentor Graphics Questa, and power analysis done with Cadence Voltus, providing a level, reproducible playing field. Both the Cortex-M4 and the EFLX were run for 256 data samples. Since the EFLX-based hardware acceleration handles one sample per clock cycle, what was a sizable advantage in dynamic power for the Cortex-M4 is completely offset by the extended number of cycles needed to perform the same function. Again, the Cortex-M4 power doesn’t include any memory access.
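The arithmetic behind that offset is straightforward: energy scales with dynamic power times cycle count. The cycle counts below come from the study; the relative power figure is hypothetical, chosen so the result lands near the reported 1.75x:

```python
# Energy = dynamic power x cycle count (same clock assumed for both).
M4_CYCLES   = 8080  # Cortex-M4, 5 tap FIR, 256 samples (from the study)
EFLX_CYCLES = 256   # one sample per clock cycle (from the study)

M4_POWER   = 1.0    # normalized; absolute values are illustrative
EFLX_POWER = 18.0   # hypothetical relative dynamic power of the EFLX array

# Even at much higher dynamic power, far fewer cycles win on energy.
energy_ratio = (M4_POWER * M4_CYCLES) / (EFLX_POWER * EFLX_CYCLES)
print(f"EFLX energy advantage: {energy_ratio:.2f}x")
```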


    The energy delta is massively in favor of the EFLX configuration. For the 32-bit 5-tap FIR, EFLX has a 1.75x advantage; for the 16-bit filter, that jumps to 4.76x. The 16-bit BIQUAD shows a similar result, with a 1.49x advantage.
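    The arithmetic behind a result like this is simple: energy is power integrated over time, so at a fixed clock it is proportional to power times cycle count. The power figures below are hypothetical, chosen only to show how a core with lower dynamic power can still lose on energy when it needs far more cycles; the cycle counts are the ones quoted in the study.

```python
# Energy = power x cycles (at a fixed clock frequency).
# mW * cycles / MHz gives nanojoules.
def energy_nj(power_mw, cycles, clock_mhz):
    return power_mw * cycles / clock_mhz

samples = 256
mcu_cycles  = 8080          # 5-tap FIR on the Cortex-M4 (from the study)
eflx_cycles = samples       # one sample per clock cycle on EFLX

# Hypothetical dynamic power values, NOT figures from the Flex Logix study.
mcu_nj  = energy_nj(power_mw=2.0,  cycles=mcu_cycles,  clock_mhz=100)
eflx_nj = energy_nj(power_mw=10.0, cycles=eflx_cycles, clock_mhz=100)

advantage = mcu_nj / eflx_nj    # >1 means EFLX wins on energy
```

    Even with the accelerator drawing five times the dynamic power in this made-up example, the ~32x cycle-count gap leaves it well ahead on energy.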

    EFLX tiles take only 0.13 mm[SUP]2[/SUP], so these implementations are not using up a lot of extra area. Leakage power can start to dominate at lower frequencies, but the simple solution is power gating when the EFLX-based hardware accelerator is not in use – and there is negligible wake-up overhead, unlike an MCU core that takes energy just to come out of sleep.

    Follow the link to the complete study presentation with all the background on the Flex Logix landing page (PDF, registration not required):

    EFLX: Energy Efficient Embedded FPGA for DSP Applications

    I don’t think Flex Logix is picking on an ARM Cortex-M4 per se. It’s just that the Cortex-M4 is extremely popular in wearable and IoT applications because of its computational punch and relative energy efficiency compared with other conventional solutions. The fact is that any MCU-style core would probably have similar issues when asked to take on heavier DSP algorithms. The approach of adding a small chunk of DSP hardware (or more general purpose logic) with synthesizable, optimizable, power and clock gated embedded programmable logic, while keeping the rest of the IP around a favorite processor core, is quite compelling.


    Semiconductor C-level Executives Explore the Seventh Sense!

    Semiconductor C-level Executives Explore the Seventh Sense!
    by Daniel Nenni on 10-14-2016 at 12:00 pm

    The GSA US Executive Forum is in its 5[SUP]th[/SUP] year. It is a time for top level semiconductor executives to meet and try to make sense of a very complex and fast moving industry that has tremendous influence on modern day life. You can see a list of attendees with bios and pictures HERE. There is a lot to talk about (The Future of Drones and Cloud Robotics, The Truth about Deep Learning, Connected Vehicles in Smart Cities…) but let’s first talk about the location.

    The event is held at the Rosewood Sand Hill Hotel in Menlo Park for a reason. Sand Hill is the Silicon Valley version of Wall Street due to the concentration of venture capitalists who funded the majority of semiconductor companies that we all benefit from today. It is also a five star hotel on 16 acres of sweeping Northern California landscape and offers the best food on the conference circuit, absolutely!

    The content of this conference is a bit overwhelming, but let me tell you this: the key takeaway is that data flow to and from the cloud is growing exponentially, so Intel and the entire cloud supply chain, specifically TSMC, will continue to post record growth numbers, for sure.

    The most entertaining talk came from Dr. Peter Stone (Professor of Computer Science at the University of Texas at Austin) and his presentation on Autonomous Learning Agents. Dr. Stone used his RoboCup championship soccer team as an example. SoftBank (which acquired ARM) is also a RoboCup competitor so you can probably expect ARM to start talking more about robotics. These robots are connected through the cloud so they can work as a team. The goal is for robots to beat humans in soccer by 2025 but as you can see by these clips they have quite a ways to go:

    The international RoboCup community fosters the development of intelligent robots by defining and executing competitions that are used by scientists and students from all over the world to test and demonstrate their robots in attractive, realistic scenarios. See for yourself what RoboCup teams have achieved in the past 20 years. Meet more than 3,500 dedicated scientists and developers from more than 40 countries. Be inspired by the contests, and become part of the RoboCup network.

    For those of you who, like me, grew up with the Jetsons, the robotic revolution should not surprise you at all.

    The keynote was based on the book “The Seventh Sense” by Joshua Cooper Ramo and was absolutely incredible. I am reading his book now but I do not have the space here to do it justice so I found a clip that does:

    Endless terror. Refugee waves. An unfixable global economy. Surprising election results. New billion-dollar fortunes. Miracle medical advances. What if they were all connected? What if you could understand why?

    In this groundbreaking new book, Joshua Cooper Ramo examines the historic force now shaking our world – and explains how each of us can master it. The Seventh Sense won’t merely change the way you see the world. It will also give you the power to change it.

    Also read: Robots could eventually replace soldiers in warfare. Is that a good thing?


    eSilicon Just Made It Easier to Explore Memory Tradeoffs

    eSilicon Just Made It Easier to Explore Memory Tradeoffs
    by Bernard Murphy on 10-14-2016 at 7:00 am

    If you are building an advanced SoC, you know that you’re going to need a lot of embedded memory. Unless this is your first rodeo, you also know that which memories you choose can have a huge impact on Power, Performance and Area (PPA) and, for some applications, Energy (power integrated over time), Temperature and Reliability. Which makes selecting the optimal memories for your objective a pretty important part of architecting the design and optimally using those IPs in implementation.

    eSilicon Webinar:

    Browse and Buy Semiconductor IP Online

    If you are working with an ASIC company like eSilicon, you also have foundry options. Since memory macros and compilers are foundry-specific, in your planning you’ll want to use memory models for your selected foundry (and explore the options offered by that foundry). eSilicon recently announced an extension to their STAR Navigator tool to provide automated, online quoting and purchasing capabilities for memory IP and I/O libraries from their IP design team. I thought it would be interesting to check it out in the context of this objective.

    I started with Try and Buy IP, which took me to the Compare screen above. I was mostly interested in comparing PPA for various options, so chose to “Browse system instances” (so I didn’t have to create my own instances). This takes you to a screen where you can select options you want to compare (you may need to get approval to look at some options if you are just tire-kicking).

    I chose to assume that I was already committed to TSMC and 28nm and I wanted to take a look at 2-port register files. Making selections automatically adjusts options in lower rows so you only get to pick from available options. Some of the options could use slightly more explanation (Mike Gianfagna, VP of marketing at eSilicon told me this is being improved) but aren’t really that difficult to figure out – NW: number of words, NB: bits per word and CM: column muxing. I chose “ALL” on these options to see how PPA was going to vary as a function of these parameters.


    Once you’ve finished a selection, hit “Find Instances” and a configuration line appears below it; then click on “Add to Compare”. To compare I needed to select some other configurations, so I clicked on “Add more instances”. I tried adding the same configuration at a Typ-typ corner (the first was Fast-fast), and an HPM implementation at fast-fast (the first was HPC).


    When I had selected each of these instances, I could easily run comparisons on various parameters. First I went for read power versus area (above). Since the largest configurations have the largest area, it’s not too surprising that dynamic read (DR) power rises with area. I can also see how each of the process choices behaves with respect to the others. The specific process detail has been removed to protect confidentiality. You can see all the details when running the tool, however. I can generate similar plots for dynamic write and leakage power. By the way, you can also hover over any point to see a detailed breakdown for that instance.

    Now the performance part of PPA. Above is the plot of DR power against frequency. This didn’t require any change in configuration setup; I just selected frequency as the X-axis. Unsurprisingly, the highest frequency instances are for the smallest memories. You can see that by hovering over the data points and examining that particular memory configuration.

    You can play around with these graphs in a bunch of different ways, plotting say leakage power versus total bits. You can also dump results out to a CSV file so you can do more detailed analysis in your architectural modeling.
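    That CSV export is the natural hand-off point into your own architectural modeling. Here is a minimal sketch of post-processing such an export with the standard library; the column names (“instance”, “total_bits”, “leakage_uW”) are assumptions for illustration, so match them to the headers in your actual download.

```python
import csv
import io

# Sketch of post-processing a memory-IP comparison CSV export.
# Column names and values below are made up for illustration.
sample = """instance,total_bits,leakage_uW
rf2p_256x8,2048,1.9
rf2p_512x16,8192,4.2
rf2p_1024x32,32768,11.0
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Leakage per bit is one way to rank candidates across different sizes.
best = min(rows, key=lambda r: float(r["leakage_uW"]) / int(r["total_bits"]))
```

    From there it is a short step to feeding the same rows into a spreadsheet or plotting library for the leakage-versus-total-bits view mentioned above.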

    When you’re done, you can select one or more memory IP instances and generate full CAD models (Verilog, VHDL, .lib and so on). That gets you up and running with memory models (download from the “My IP” tab) in your simulation and you haven’t committed a dime yet.

    Finally, few things in life are free, so time to find out what using this memory instance will cost you. Select the instance, add it to your shopping cart, then click on the cart logo. Just like Amazon! Of course IP like this is a bit pricier than an Amazon purchase; a 256×8 bit 2-port register file came in at over $50k. But that’s a drop in the investment bucket for the mobile applications in which these IP are commonly used.

    Check out the eSilicon Navigator Try and Buy site to explore memory IP options. It’s a pretty useful capability to help you figure out some of the key decisions you’re going to have to make about your design. You can start HERE.

    More articles by Bernard…


    Circuit Simulation Videos Show How To

    Circuit Simulation Videos Show How To
    by Daniel Payne on 10-13-2016 at 4:00 pm

    One of the things that I miss most about attending trade shows like DAC in the old days was that you actually got to see EDA tools being demonstrated live in the exhibit area. You could see what the GUI looked like, how the dialogs worked, and learn what kind of control you could have during analysis. Most of what you see today at DAC in the exhibits are PowerPoint presentations, nothing really live anymore. Now that we have the Internet and video capabilities, a few EDA companies are bringing back the concept of actually showing you what their EDA tools can do, step by step, dialog by dialog. I’ve just watched a series of four videos at Synopsys covering the usage of their Simulation and Analysis Environment (SAE), something that any circuit designer would benefit from watching.

    Advanced Testbench Setup

    In the diagram below the purple rectangle area shows the Simulation and Analysis Environment; in the design flow it’s where you read in a netlist, choose the types of analysis that you want to perform, select which outputs to view, define any measurements, add specifications to verify that your circuit meets a criteria, launch a circuit simulator, then start analyzing the results of simulation.

    The GUI looks intuitive with areas for parameters, analysis, outputs and testbenches:

    Watch Video 1

    Managing Simulation ECO Flow

    The second video builds upon concepts from the first one, then shows how to run a circuit simulation and then take the operating point analysis results and annotate the original netlist:

    The demo shows how to change the W/L sizes in an adder circuit using the GUI, update parameters, change the testbench, then rerun the circuit simulation in just seconds.

    Watch Video 2

    Managing Multiple Testbenches

    Most circuits require multiple testbenches, so this third video shows how to: clone a testbench, do remote job distribution, job policy setup, and hierarchical job monitoring. A second netlist with post-layout extracted parasitics is added to the demo circuit, then the waveform results are compared with pre-layout results:

    Cloning a testbench can be used when you want to compare HSPICE versus CustomSim results on the same circuit. For the demo adder there were two testbenches using HSPICE and two testbenches using CustomSim, and all four testbenches were run in parallel while you could watch the status of the simulations. You can even set up where your simulations will take place, using LSF or SGE. The job policy setup can specify that long simulation runs like a PLL use a remote machine, while smaller circuits use the local host.
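    The routing logic a job policy expresses can be pictured as a simple rule: long-running simulations go to the compute farm (LSF or SGE), short ones stay on the local host. The threshold and job fields below are illustrative, not SAE syntax.

```python
# Sketch of job-policy routing: long runs to the farm, short runs local.
# The 600 s threshold and the job dictionary fields are assumptions.
def pick_queue(job, local_limit_s=600):
    if job.get("est_runtime_s", 0) > local_limit_s:
        return "lsf"        # e.g. a long PLL closed-loop simulation
    return "localhost"      # small circuits finish faster locally

pick_queue({"name": "pll_lock", "est_runtime_s": 7200})   # farm job
pick_queue({"name": "adder_tb", "est_runtime_s": 30})     # local job
```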

    Watch Video 3

    Running Parametric Sweeps and PVT Corners

    Circuit designers often want to vary parameters like load capacitor values or run PVT corners. The GUI lets you quickly define these sweeps and corners. Multiple simulations were submitted to both the HSPICE and FineSim circuit simulators, and the simulation results were displayed in tabular format. The GUI to define PVT corners takes only a few clicks to set up:

    24 jobs were submitted to HSPICE and 24 jobs to FineSim on a local host, then the hierarchical job monitor displayed all 48 of these jobs as they completed. Measurement results are easy to visualize across PVT corners: a green color shows a measurement that met its specification, while red shows a measurement that didn’t meet spec:
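    The pass/fail logic behind that green/red display amounts to comparing each corner’s measurement against its spec limit. A minimal sketch, with made-up spec names and values:

```python
# Sketch of the pass/fail check behind a green/red corner display.
# Spec names and limit values below are made up for illustration.
specs = {"delay_ns": ("max", 1.2), "power_mw": ("max", 3.0)}

def check(corner_results):
    status = {}
    for meas, value in corner_results.items():
        kind, limit = specs[meas]
        ok = value <= limit if kind == "max" else value >= limit
        status[meas] = "green" if ok else "red"
    return status

# A fast-corner result where timing passes but power exceeds spec.
ff = check({"delay_ns": 0.9, "power_mw": 3.4})
```

    Run across all 48 corner/sweep results, a table of these statuses is exactly the colored matrix the video shows.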

    Watch Video 4

    Summary
    Through just four videos you get a very quick yet thorough introduction on how to set up, control and analyze your circuit simulations through this new GUI called the Simulation and Analysis Environment. If you’re not familiar with SAE, then I recommend that you watch them in sequence; each video is 15 minutes in length. There is a short sign-up form to register with.