Does IoT need Sensor Fusion? Yes, but at low-power, low cost…and higher performance
by Eric Esteve on 01-27-2016 at 12:00 pm

We have said this in the past, but let's reiterate: IoT devices will be successful only if they can meet low-cost and low-power requirements. Low cost is the condition for IoT device market penetration, meaning adoption so broad that we count several IoT systems (and dozens of devices) in every house. That is the only way to reach the forecasts predicting 20 billion devices by 2020, and the condition translates into an IoT edge device cost below $10, not above $100 as we see today. The low-power condition also forces designers to completely rethink the device architecture and optimize the various existing IP implementations to create more efficient subsystem IP.

The move from single-sensor applications to voice and multiple sensors, known as sensor fusion, requires higher levels of signal processing, driving higher DSP performance. User interface developments also require higher levels of signal processing. At the same time that users are demanding more complex features, they expect devices with longer battery life. Battery life of only a few days used to be acceptable, but users now expect an IoT device to stay active for two weeks between charges. That is why Synopsys has completely re-architected its Data Fusion Subsystem (pictured below). Such an IoT SoC or microcontroller should be powered by an AA, coin-cell or lithium battery and support Bluetooth, 802.15.4, ZigBee or Wi-Fi wireless connectivity. Technology and IP needs can be complex, as the SoC may have to embed NVM (flash or MTP), support various interfaces and provide a high level of security. While a design can target 180 nm or 90 nm to validate the IoT concept, high production volumes will require moving to the 65, 55 or 40 nm technology nodes.

Synopsys has launched the DesignWare Smart Data Fusion IP Subsystem, integrating the latest ARC EM9D and EM11D processors for highly efficient DSP performance. The subsystem integrates a microDMA controller providing 4x faster access times, smart enough to decrease system-level energy consumption by enabling data transfers during processor sleep modes. A look at the data-processing profile shows:

  • The EM core is in sleep mode during data collection
  • The EM core and AHB bus are inactive during data storage
  • Data access is 4x faster during processing
  • Overall sensor data processing exhibits much lower power than a typical system accessing memory and DMA over the bus (a firmware-level sketch of this flow follows)
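
To make that flow concrete, here is a minimal firmware sketch of the collect-while-sleeping pattern. The driver calls (`udma_start`, `wait_for_interrupt`, `process_block`) and the buffer placement are hypothetical placeholders, not the actual ARC EM API:

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE 256  /* samples gathered per processing block */

/* Buffer intended to live in tightly coupled (XY) memory, so the core
   can later process it with single-cycle accesses, off the AHB bus. */
static int16_t sample_buf[BLOCK_SIZE];

static volatile bool dma_done = false;

/* Hypothetical platform hooks assumed to be provided by the BSP. */
extern void udma_start(int16_t *dst, int n); /* program sensor-to-memory DMA */
extern void wait_for_interrupt(void);        /* put the core to sleep */
extern void process_block(const int16_t *buf, int n);

/* Hypothetical microDMA completion interrupt handler. */
void udma_isr(void)
{
    dma_done = true;
}

int main(void)
{
    for (;;) {
        /* Data collection and storage run with the CPU asleep and the
           AHB bus idle; only the microDMA is active. */
        dma_done = false;
        udma_start(sample_buf, BLOCK_SIZE);
        while (!dma_done)
            wait_for_interrupt();

        /* The core wakes only to crunch a full block of samples. */
        process_block(sample_buf, BLOCK_SIZE);
    }
}
```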

Implementing hardware accelerators such as fast math functions or a crypto pack (AES/SHA/3DES), tightly coupled memory and peripherals allows power consumption to be reduced by up to 85 percent compared with discrete solutions. The ARC EM DSP cores have been designed for ultra-low-power control and DSP efficiency, thanks to an energy-efficient three-stage RISC pipeline, a unified single-cycle 32×32 MUL/MAC unit and dedicated, energy-efficient signal processing of voice/speech, audio and sensor data. Synopsys has implemented XY memory to increase DSP performance, and provides software drivers and an extensive library of DSP functions, such as FFT and DCT, and FIR and IIR filters, to speed application software development.
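
To illustrate why a single-cycle MAC and XY memory matter for such a library, here is a generic Q15 block FIR filter in plain C (an illustrative sketch, not Synopsys' DSP library code). On a DSP core with a single-cycle 32×32 MUL/MAC and dual XY data banks, each iteration of the inner loop maps to roughly one instruction:

```c
#include <stdint.h>

/* Direct-form FIR: y[i] = sum_k coef[k] * x[i+k], in Q15 fixed point
   with a 64-bit accumulator for headroom. Assumes the caller provides
   n + taps - 1 input samples so every window is fully populated. */
void fir_q15(const int16_t *x, int16_t *y, int n,
             const int16_t *coef, int taps)
{
    for (int i = 0; i < n; i++) {
        int64_t acc = 0;
        for (int k = 0; k < taps; k++)
            acc += (int32_t)coef[k] * x[i + k];  /* one MAC per tap */
        y[i] = (int16_t)(acc >> 15);             /* renormalize to Q15 */
    }
}
```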

Synopsys can offer a demo platform developed in collaboration with Renesas, including a MonArch MCU running at 200 MHz, a 3-megapixel camera for image/gesture recognition, and Bluetooth supporting wireless audio output and run-time communication with a tablet. The platform supports MP3 playback and an SD card for audio storage.
Several applications can be demonstrated, such as voice activation and control using Sensory's TrulyHandsfree software, 9D positioning and activation based on Hillcrest Labs' MotionEngine software, and face detection and gesture control using Synopsys' proprietary demonstration code.

Availability

The DesignWare Smart Data Fusion IP Subsystem will be available in February 2016.
Learn more about the Smart Data Fusion IP Subsystem: https://www.synopsys.com/dw/ipdir.php?ds=smart-data-fusion-subsystem

By Eric Esteve, IPNEST

More articles from Eric…


Inception of "Intel Inside"
by kunalpghosh on 01-27-2016 at 7:00 am

Let me start with a quote:

“Competition is always a good thing. It forces us to do our best. A monopoly renders people complacent and satisfied with mediocrity.” – Nancy Pearcey

By the end of this post, it will be quite evident that it was competition that led to one of the classiest campaigns, "Intel Inside", for Intel's processors. As I mentioned in my previous post about the Japanese taking the lead in the DRAM market, most American companies felt the threat, and Motorola was one of them [1]. This threat is what started Motorola's Six Sigma program in 1981.

The table below will help in understanding sigma levels and what they mean in terms of the "number of defects".
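
For reference, here is the standard Six Sigma conversion (which includes the conventional 1.5σ shift between short-term and long-term performance):

  Sigma level   Defects per million opportunities   Yield
  1σ            691,462                             30.9%
  2σ            308,538                             69.1%
  3σ            66,807                              93.3%
  4σ            6,210                               99.38%
  5σ            233                                 99.977%
  6σ            3.4                                 99.99966%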


Nevertheless, the Japanese also started similar programs; though these were not as good as Motorola's, the Japanese were still ahead thanks to their 'past perfections'. All said and done, the program helped the US prevent quality problems, and the Japanese were beaten at their own game of 'quality'.

That was not all: in 1987, SEMATECH (a non-profit consortium) was formed, bringing together US semiconductor manufacturers, chipmakers and material suppliers to address technical challenges and reduce manufacturing defects and costs [2]. Consortia of this kind had been one of the Japanese industry's popular routes to success.

Intel was one of the first companies to market memories, which had allowed it to charge premium prices. But after facing a threat in the 64K market from Fujitsu (which moved into the 256K market very quickly), Intel decided to exit the DRAM industry by 1986 and focus on microprocessors, for which Intel was (and still is) a pioneer [3].

In 1981, IBM's first microcomputer used the Intel 8088 with a clock frequency of 4.77 MHz [4] (although Intel had introduced the first commercial microprocessor back in 1971). It was a huge success, and microprocessors became Intel's primary business.

If that were not all, NEC (which in 1979 used to manufacture Intel's 8086/88 microprocessors) used those designs to create its own Intel-compatible microprocessors, calling them the NEC V20 and V30 [5]. While this was allowed, an NEC software engineer also disassembled the Intel 8086/88 microcode. This led to a series of court cases in which Intel claimed that NEC had infringed the copyright on its microcode. The claim did not hold up in the court of law, and Intel lost the copyright case to NEC in 1989 [6].
But guess what happened next: in 1991, the "Intel Inside" campaign was launched. It represented Intel's way of communicating directly with computer buyers about its "quality and reliability". Ever wondered what the result of this kind of advertising was? It was stunning!


A good example of this: would you buy a PC without the "Intel Inside" logo? Intel's co-op marketing programmes director, Jami Dover, mentioned in 1997 that about US$3.4 billion had been spent on marketing and advertising by Intel and PC makers since 1991. The result is also evident in the chart below.

By 2007, Intel was the leader in the semiconductor market with a 12% share, while the overall US semiconductor share was about 49% [7].

So it's confirmed: "Competition brings out the best in products"!

On a side note, the reasons mentioned above were not the only ones that led to the sharp decline in Japan's semiconductor share. There were others too, such as a decline in investment, Samsung's entry into the DRAM market and too much dependency on the home market, which will be a point of discussion in the next post.

Notes:



  1. "The Motorola six-sigma story" can be accessed at http://www.managementstudyguide.com/motorola-six-sigma-story.htm
  2. "Lessons from SEMATECH" can be accessed at http://www.technologyreview.com/news/424786/lessons-from-sematech/
  3. "Exiting the DRAM Market" (Intel) can be accessed at http://avierfjard.com/Machine%20Language%20and%20Assembly%20Language%20Programing/Exiting%20The%20DRAM%20Market.pdf
  4. "Intel 8088" can be accessed at https://en.wikipedia.org/wiki/Intel_8088
  5. "NEC vs INTEL: Breaking new ground in the law of copyright" can be accessed at http://jolt.law.harvard.edu/articles/pdf/v03/03HarvJLTech209.pdf
  6. "Intel loses copyright case to NEC" can be accessed at http://www.nytimes.com/1989/02/08/business/intel-loses-copyright-case-to-nec.html
  7. "The US semiconductor industry" can be accessed at http://www.semiconductors.org/clientuploads/Industry%20Statistics/2014%20Factbook%202.0%20-%2002032015.pdf



    Smart TV Chipset: 4 Key Takeaways from Interconnect IP
    by Majeed Ahmad on 01-26-2016 at 4:00 pm

    Ultra-high-definition (UHD) or 4K TV hardware is leading to insanely powerful chipsets in the age of Netflix, and that is taking system-on-chip (SoC) design to a whole new level of complexity. Take the case of Samsung's new chipset for SUHD TVs, which boasts more than 100 IP interfaces.

    Here, apart from the usual suspects like CPU, GPU, memory subsystems and peripherals, the new IP offerings include image processing, overlay and picture-in-picture (PIP) functions. Samsung’s new chipset—powering select models of its high-end LCD TVs with 4K resolution—requires greater bandwidth efficiency because it has to process more information per pixel.


    Samsung has licensed Arteris FlexNoC IP for use in select smart TV models

    Not surprisingly, therefore, image and vision processing capabilities, equipped with computationally intensive algorithms, are going to play a critical role in these SUHD TV models. The image and vision processing features also go hand in hand with application processing tasks tied to the open-source Tizen operating system.

    Samsung’s new SUHD TV models—unveiled at the 2016 CES—are powered by the Tizen OS, which supports web standards for TV app development and can download and install apps on Samsung’s smart TVs just like mobile phones. It’s worth noting that Samsung uses the “SUHD” marketing term for its 4K LCD TVs.

    Interconnect IP: Key Takeaways

    Coming back to the Samsung chipset and the more than 100 IP blocks it supports, the Korean semiconductor giant is using the Arteris FlexNoC interconnect fabric to ensure optimal connections that serve as the backbone of this large and powerful chipset. The FlexNoC on-chip interconnect will help Samsung reduce die size and cost as well as meet conflicting bandwidth and latency requirements.

    Here are the key takeaways that SoC powerhouse Samsung seems to have drawn from using Arteris' FlexNoC interconnect backbone.

    1. Debugging Features

    The Arteris FlexNoC technology offers on-chip debug visibility, trace and statistics collection. The observability that comes with FlexNoC's debugging features is software programmable and encompasses probes, trace and performance counters.

    Furthermore, the on-chip debug visibility is integrated with debug infrastructures and tool chains, including ARM's and Lauterbach's. Timeout and watchdog features in FlexNoC also help in challenging situations, for instance when an individual IP block hangs.

    2. Multiprotocol Support

    The multiprotocol support in FlexNoC interconnect IP facilitates the use of multiple standards defined by different entities, such as ARM AMBA and OCP. That plays a vital role in creating a large and complex chip like the one used in Samsung's SUHD TVs while reducing power consumption and die area.

    Arteris has recently expanded its support for the ARM Advanced Microcontroller Bus Architecture (AMBA) protocols to confront the rising complexity of SoC designs. Arteris aims to provide support for current and future versions of AMBA standards such as AXI, AXI Coherency Extensions (ACE) and Coherent Hub Interface (CHI).


    FlexNoC IP eases communication bottlenecks among subsystems in large SoCs

    3. Quality-of-Service (QoS)

    Samsung’s SUHD TVs are striving to become the Internet of Things (IoT) control hub while allowing consumers to discover and access a vast amount of content ranging from movies to games to TV programs. Here, bandwidth boost and latency control are crucial in order to ensure superior picture quality in Samsung’s smart TVs with the 4K display.

    The FlexNoC interconnect fabric from Arteris ensures on-chip data flow with concurrent bandwidth and latency mechanisms. That allows large chipsets to transmit master/initiator QoS information throughout the interconnect all the way to the target.
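
    Conceptually, transmitting initiator QoS information to the target means each request packet carries a priority tag assigned at its master, and every arbitration point along the route honors that tag. Here is a minimal sketch of the idea (a generic illustration, not Arteris' implementation):

```c
#include <stdio.h>

/* A request carries its initiator's QoS priority end to end. */
typedef struct {
    int initiator_id;
    int priority;      /* assigned at the master; display beats CPU here */
    unsigned addr;
} request_t;

/* One arbitration point in the fabric: grant the highest-priority
   pending request (a real fabric would also regulate bandwidth and
   break ties, e.g. round-robin). */
static int arbitrate(const request_t *pending, int n)
{
    int winner = 0;
    for (int i = 1; i < n; i++)
        if (pending[i].priority > pending[winner].priority)
            winner = i;
    return winner;
}

int main(void)
{
    request_t pending[] = {
        { 0, 1, 0x1000 },  /* CPU prefetch: can wait            */
        { 1, 3, 0x2000 },  /* display read: latency-critical    */
        { 2, 2, 0x3000 },  /* video decode: bandwidth-sensitive */
    };
    int w = arbitrate(pending, 3);
    printf("grant to initiator %d (priority %d)\n",
           pending[w].initiator_id, pending[w].priority);
    return 0;
}
```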

    4. 4K Resolution

    If bandwidth defines a smart TV's ability to deliver TV apps, the shift from 1080p to 4K hardware sets the tone for higher resolutions, a.k.a. more pixels. The hardware for ultra-high-definition or 4K resolution brings more color pixels and higher frame rates, but at the same time the imaging and vision subsystems add more strain to the overall SoC fabric.
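
    Some rough arithmetic shows the scale of the problem: a 3840×2160 frame carries about 8.3 million pixels, four times the roughly 2.1 million of 1080p, so at 60 frames per second and 24 bits per pixel the pipeline must move on the order of 12 Gbit/s of raw pixel data, before counting the multiple passes that enhancement algorithms make over each frame.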

    A chipset with a robust interconnect backbone can efficiently adapt to the algorithms for color enhancement and resolution boost. Arteris’ FlexNoC on-chip IP backbone preps 4K chips to better handle these complex algorithms without raising the power consumption and size of the chipset.

    Also read:

    Interconnect Watch: 3 Chip Design Merits for Network Applications

    Rockchip Bets on Arteris FlexNoC Interconnect IP to Leapfrog SoC Design

    Is Interconnect Ready for Post-mobile SoCs?


    Are You Ready to Wear Your Own Device (WYOD) in 2016?
    by Pranay Prakash on 01-26-2016 at 12:00 pm

    For IT administrators it has never been so complex, yet so interesting. Until about 6-8 years ago, mobility in the workforce meant supporting company equipment such as corporate-designated laptops and BlackBerry phones. And it worked: IT organizations and employees realized the power of mobility and how having corporate mobile assets enhanced their productivity. Over the past several years there has been a proliferation of mobile devices, and consumers have choices, in many cases with multiple devices at their disposal. I personally have both iOS and Android devices in different form factors for personal and family use. And then I have my super-durable workhorse laptop and a smartphone from my company. Like other techies, I can justify the existence of each of them 🙂

    Proliferation of Smartphones
    Earlier this year, IDC forecast that smartphone growth would be the fastest, with slower growth in the PC and tablet markets. As per IDC, "Detachable 2-in-1s show strong growth potential in tablets, and convertible notebooks are beginning to gain traction in PCs. But ultimately, for more people in more places, the smartphone is the clear choice in terms of owning one connected device." I am sure PC manufacturers will want to argue with such a forecast, but it is a growing reality. All you need to do is look around you at an airport, train station or any public place: smartphones are everywhere. I am pretty social-media savvy and mostly get my news and my LinkedIn and Twitter feeds on my smartphone. Enterprise IT organizations have been keenly observing the growth of smartphones, and some of them see an opportunity in this paradigm.


    Bring Your Own Device (BYOD)
    In the 2009-10 timeframe, BYOD emerged as an enterprise IT initiative to allow employees and partners to use a personally purchased device of choice, e.g. a smartphone, and access applications authorized by IT. As expected, IT organizations were not thrilled initially, as there were concerns around security, upgrades and the general complexity of managing these devices. However, the benefits of BYOD outweigh the complexities. Besides cutting device costs, enterprises can now expand their mobile workforce and enhance productivity. Most new applications, including CRM, are developed in a 'mobile first' and 'cloud first' environment, and enterprise IT can help the workforce access these applications faster through BYOD while spending less effort qualifying and supporting new corporate-owned mobile devices.

    It also means the significance of the cloud increases greatly as a medium to deliver these applications. Microsoft, VMware and others are all trying to enable the new 'cloud-smartphone' ecosystem. BYOD adoption is increasing, and as per Gartner in a May 2015 press release, by 2017 half of employers will require employees to supply their own device for work purposes. A pretty bold statement! Assuming BYOD gets adopted at the rate Gartner predicts, what does enterprise IT need to prepare for next?

    Wear Your Own Device (WYOD)
    WYOD is not a widely used term, and some can argue it is a subset of BYOD; regardless, it is something that cannot be ignored. The significance of WYOD lies in extending use cases beyond IT providing or allowing access to enterprise and consumer applications: WYOD is about enabling Internet of Things (IoT) use cases. Commercially, there are many applications in healthcare, the military, supply chains, factories and elsewhere that can benefit from WYOD. While the first Google Glass wasn't a commercial success, the next generation of such devices, combined with fast, affordable and reliable wireless connectivity, ease of use and new IoT applications, will drive greater adoption. The Apple Watch, Fitbit and other wrist-worn devices are already being used for smart-home and personal health applications. We will see growth in commercial adoption as developers write new IoT applications for wearables.


    Wearable technology adoption is surely on the rise, and while it is not forecast to grow faster than smartphones, this category is emerging as the next biggest connected device category. IDC is forecasting 155.7 million units to be shipped in 2019. And a lot of us will be wearing them. In full disclosure, I don't have a wearable device on me, but I am sure I will find a justification to own one in 2016 🙂

    How about you? Are you going to Wear Your Own Device (WYOD) in 2016? If you’re in IT, are you going to incorporate WYOD in your portfolio?


    Demystifying Cisco’s Five Pillar Innovation Strategy
    by Patrick Moorhead on 01-26-2016 at 7:00 am

    Large companies that lead their industry generally have a hard time maintaining the entrepreneurial spirit that got them to that leading position in the first place. On one hand you have relatively "new" companies like Facebook that keep growing; on the other, you have companies like Yahoo that are struggling in the new age of mobility, social and the emerging IoT wave. With a market cap of over $130 billion, it is hard for a company like Cisco Systems and its peers to continue to innovate as they did when they first started.

    Cisco Systems CEO Chuck Robbins responds to questions at 2015 WSJD Live on October 20, 2015 in Laguna Beach, California. FREDERIC J. BROWN/AFP/Getty Images

    While innovation isn't necessarily about structure, in large companies you have to have some order around it or it just looks like executives and employees wasting time and having fun. Bean bag chairs, open offices, Starbucks and fine dining at work have to be balanced with clear goals and accountability. That is why Cisco has developed a multi-faceted and structured innovation strategy to ensure that the innovative spirit is kept alive alongside accountability.

    Cisco's strategy is structured into five pillars. Those five pillars are what you would expect if you've been engaged in any part of product or corporate strategy: build, buy, partner, invest and co-develop. While the pillars themselves may not be unique, I believe the manner in which Cisco Systems has executed on them, and the comprehensiveness of that execution, is. I've spent some time drilling down into Cisco's innovation engine and want to share some of my findings.

    Build

    To foster a culture of innovation, Cisco first and foremost has to build products and create its own IP (intellectual property). Cisco has over 70,000 employees, 25,000 of whom are engineers, and spends $6.3 billion on R&D. Innovation success isn't determined only by how much you spend, but rather by what you spend it on and what you do with it; still, that $6.3B dwarfs the R&D budgets of networking rivals Juniper Networks, Brocade Communications Systems, Palo Alto Networks and Arista Networks. This amount of R&D spend indicates how seriously Cisco takes maintaining market leadership and creating new and innovative products.

    Cisco has 35% of their engineers working in an agile development environment

    The company strives to breed a culture of innovation by implementing specific practices. These include having 35% of its engineers work in an agile development environment, generally on smaller, more nimble teams, and supporting what are called "Alpha" projects that facilitate disruptive new technologies. As a result of these efforts, the company has generated 19,000 patents globally, 12,000 of them in the US.

    Cisco also operates a technology investment fund which has already funded 18 projects. Such efforts are common among large tech companies as a way to influence smaller startups that could be potential acquisition targets as well as drivers of demand for their existing products.

    Buy

    That leads into the next part of Cisco's innovation strategy: acquisitions and direct investments. Since 1993, Cisco has made 180 acquisitions, which the company says have contributed 1 to 2% of growth to its net profit margin. Cisco's many acquisitions include WebEx, the well-known collaboration platform for webinars and teleconferences that runs on commodity hardware, as well as Meraki, considered one of the leading innovators in cloud-based wired and wireless networking hardware.

    Since 1993, Cisco has made 180 acquisitions

    These acquisitions not only bring Cisco innovation in relevant verticals, they also bring in key talent that allows the company to keep innovating and growing. The wide spectrum of acquisitions also means that Cisco can offer a diverse set of products to customers and partners, enabling them to create entire solutions that solve real business problems. The enablement of key strategic partners is an extremely important aspect of Cisco's acquisitions, and it leads into the next pillar of the company's innovation strategy, possibly the most important one.

    Partner

    As part of its innovation strategy, Cisco relies heavily on creating solutions with technology and services partners. This flows directly from Cisco's leadership in networking. These partnerships are vast and cover a broad spectrum of companies; in 2014 they amounted to $43 billion of Cisco's annual revenues, or just under 90% of the company's revenue.

    $43 billion (90%) of Cisco’s annual revenues come from partnering

    Some of these partners are among the biggest enterprise solutions providers in the world: Accenture, CA Technologies, Citrix, EMC, Fujitsu, Hitachi Data Systems, IBM, Intel, Microsoft, NetApp, Oracle, Samsung, SAP, VMware and many others. Beyond these titans of the IT industry, Cisco has over 700,000 channel resellers and engineers as partners. Some of these partnerships grow into closer relationships that eventually bring the companies together in joint development.
    The next two elements of Cisco's innovation structure are, to me, the most exciting.

    Invest

    Sometimes Cisco doesn't yet have a partnership with a company but finds its technologies potentially beneficial to the industry's growth or its own. Sometimes Cisco wants to hedge its bets in case the market or technology goes in a certain direction. For these cases, Cisco has created a $2 billion fund for investing in companies, both large and small, across the world.

    Cisco invested in over 100 companies and funded 44 investments in 25 countries

    To date, Cisco has invested in over 100 companies around the world and has funded 44 different investments in 25 countries. This aspect of Cisco's innovation strategy is perhaps the most overlooked, but in many cases these investments allow companies to grow into partners that use Cisco technologies and solutions or drive demand for them.

    Co-develop

    As part of its innovation culture, Cisco also co-develops with over 300,000 developers. The company's goal is to reach a million developers by 2020 and 6,000 applications in one year. These co-development practices include Cisco's Entrepreneurs in Residence program, or EIR for short. The EIR program is a startup incubator that usually lasts for six months and is focused on helping startups create disruptive technologies that help Cisco and the industry innovate and grow. Some of these startups, 27 in total, have been acquired by Cisco or others in the industry, and they help develop long-term relationships that support Cisco's innovation strategy.

    Cisco’s goal is to reach a million developers by 2020 and 6,000 applications

    In addition to the EIR program, which has a footprint in five cities across two regions, Cisco has nine IoE (Internet of Everything) Innovation Centers. These IoE centers are in tech hubs such as London, Rio de Janeiro, Korea, Barcelona, Sydney, Tokyo and Toronto. One of the most mature is the Innovation Digital Enterprise Alliance (IDEA) national incubator in London, which is designed to drive UK entrepreneurship and innovation in the areas of sensors, M2M, energy, transportation and IoE startups. On one of my most recent trips to London I visited the IDEA incubator, which houses a healthy 16 to 18 startups on-site at any given time, all driving business for Cisco.

    Wrapping Up

    I attribute all of this detail and information on the innovation program to Cisco's new-found openness about how it innovates as a company and its desire to show the industry how broadly connected the company is and how well it plays with others. Cisco deserves some credit for the transparency, structure and breadth of its innovation strategy. Over the last six months, I have noticed a dramatic shift, an improvement, in how the company communicates and interacts with the outside world.

    From my vantage point, the company now understands the opportunities and threats that await it. The biggest opportunities are the software-defined data center and IoT, and the biggest threat is open networking. I have to give some credit to the new Cisco CEO, Chuck Robbins. I have followed Cisco for decades, and the biggest changes I noticed came when he took charge. And that's a really good thing for Cisco.

    To give a deeper look into how Cisco innovates and facilitates innovation with startups, I will follow up with a deeper dive into London's IDEA incubator and how Cisco is working with IoE startups to help change the industry and add to Cisco's business growth. I spent hours with the center's directors and with employees of the startups housed at London IDEA.

    More from Moor Insights and Strategy


    Why IoT Security is a Market for Lemons
    by John Moor on 01-25-2016 at 4:00 pm

    Concerns around the security of connected devices continue to rise. This is illustrated by a July issue of The Economist, which carried two articles on the theme, outlining the perils of connected devices in the home and, more generally, across the Internet of Things.

    In "Home, Hacked Home: The perils of connected devices" (http://tiny.cc/EconomistHackedHome), the author outlines how a Foscam camera was used by a hacker to shout obscenities at a sleeping baby. This was not an isolated occurrence. The company responded by upgrading its software and encouraging users to change the default password. Problem solved… or is it?

    In another, "Hacking the planet: The internet of things is coming. Now is the time to deal with its security flaws" (http://tiny.cc/EconomistHackingThePlanet), the publication explains that "computer security is about to get trickier" as "a world of networked computers and sensors will be a place of unparalleled convenience and efficiency" and IoT drives towards those lofty counts of connected things (measured in billions, of course).

    Is it a disaster in the making? It could be, unless all stakeholders play their part in making sure that systems are made secure, and kept secure over time, because "things" will likely have a lifetime much greater than a PC or mobile phone. The Economist suggests not billions but merely three things that would make IoT less vulnerable: basic regulatory standards, a proper liability regime, and heeding lessons learned long ago.

    What else could go wrong? How about a bit of carjacking, of the wireless kind? Andy Greenberg of Wired recently published an article titled "Hackers Remotely Kill a Jeep on the Highway – With Me in It" (http://tiny.cc/HackersRemotelyKillJeep). Two hackers, known to Greenberg, had invited him to take a drive in the knowledge that they would be tampering with the vehicle, live. The advice they gave ahead of their high jinks: "no matter what happens, don't panic". Scant advice when they disable your brakes before you slide into a ditch. All this is possible because a cellular connection allows access via the vehicle's IP address (apparently). We'll know more about how Miller and Valasek did it after their forthcoming Black Hat talk. Fear not, though, as there's good news: Chrysler has released a patch on its website. There's not-so-good news too: the patch must be installed manually, via a USB stick or a mechanic (hmmm).

    Following the automotive theme: at the recent IoT Security Summit hosted at Bletchley Park, Flavio Garcia from the University of Birmingham gave a talk entitled "Automotive Security: The Bad and the Ugly" (http://tiny.cc/iotsecuritysummit). Garcia starts by explaining that the context for automotive security is challenging, in part because of right-to-repair legislation and complex supply chain issues. He then sets out to challenge a particular semiconductor manufacturer on its claim of "unbreakable security" (a bold statement!) and goes on to outline how that security had been implemented: rather poorly, as it turned out.

    This brings us to the lemons. Garcia makes a very important point about security: when buyers cannot assess the quality of a product, they tend to differentiate on price, i.e. the cheapest wins. In this regard, he compares automotive security to a market for lemons.

    Given the rising column inches given over to matters of IoT security, it is clear that much more needs to be done to limit adversaries' opportunities to hack into our future connected world. The era of the internet of things is significantly different from the PC and mobile eras, and this time we have the chance to get ahead of the game. This endeavour may be challenged, however, by an impatient profit motive and the desire to rush products to market. So, is there an acceptable compromise? There has to be. Businesses are starting to realise that this is no longer simply a technical issue: with reputations, profits and potentially much more at stake, it is a boardroom discussion. And while we determine what changes are necessary, we should applaud the work of the researchers and ethical hackers who help us see the problems more clearly. In this way we will make progress with significantly less pain than by burying the problem.

    There are plenty of horror stories out there, but if you want to see an amusing yet serious take on product security, take a look at Ken Munro's talk "The Internet of Thingies" (http://tiny.cc/iotsecuritysummit).

    How can we address the issues? IoT security is a very complex matter. It will take a great deal of collaborative work on a global scale to make sure the totality of systems (technology, products, services, quality, regulation…) is fit for purpose. In September 2015, the IoT Security Foundation was launched after an intensive consultation period with security experts, product companies, researchers, service providers and more, to look at ways to address the near- and long-term issues. If you'd like to help us address IoT security issues and aid adoption rates, please join in at iotsecurityfoundation.org.


    Steve Furber has found his million ARM cores
    by Don Dingee on 01-25-2016 at 12:00 pm

    Some people say that everything in our lives happens for a reason. As we wrote Part I of “Mobile Unleashed”, the origin story of ARM architecture and its main progenitors Steve Furber and Sophie Wilson, we found what seemed like an obvious technological breakthrough was far from an overnight success – and it led to fascinating twists and turns. Continue reading “Steve Furber has found his million ARM cores”


    In Low Voltage Timing, the Center Cannot Hold
    by Bernard Murphy on 01-25-2016 at 7:00 am

    When I started discussing this topic with Isadore Katz, I was struggling to find a simple way to explain what he was telling me: that delay and variance calculations in STA tools are wrong at low voltage because the average (the center) of a timing distribution shifts from where you think it is going to be. He told me that I'm not alone in my struggle; he's never found an easy way to boil it down either. You just have to go through all the steps, and then the conclusion at the end makes sense. Therefore, with apologies to timing experts, here is my explanation. Throughout, I'm going to use "typical" for the most common / mode / nominal value and "average" for the mean.

    A Static Timing Analysis (STA) tool is really nothing more than an adding machine with a simple less-than/greater-than check when it hits a timing end-point, say a flip-flop. At the simplest level, it traverses paths starting from source flops, adding delays (from gates and interconnect) along those paths, until it hits destination flops. Where paths converge in-between those points, it keeps worst- and best-case delays (path-based analysis is more refined, but I think those details are not essential for this argument). Then it’s all about when the data can potentially get to a flop relative to when the clock can get to the flop. Too early and you have a hold time violation, too late and you have a setup violation.
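
    As a toy illustration of that adding-machine view (a deliberately simplified sketch, not a production STA algorithm), consider a single four-stage register-to-register path: accumulate best- and worst-case arrival times, then compare them against the clock period and the capture flop's setup/hold requirements, ignoring clock skew:

```c
#include <stdio.h>

/* Per-stage min/max delay in ns (gate plus interconnect), as an STA
   tool would look them up from library tables by slew and load. */
typedef struct { double min_d, max_d; } stage_t;

int main(void)
{
    /* Illustrative 4-stage path from launch flop to capture flop. */
    stage_t path[] = { {0.10, 0.14}, {0.21, 0.30}, {0.08, 0.12}, {0.15, 0.22} };
    const double t_clk = 1.0, t_setup = 0.05, t_hold = 0.03;

    double early = 0.0, late = 0.0;   /* best/worst-case arrival times */
    for (int i = 0; i < 4; i++) {     /* the "adding machine" */
        early += path[i].min_d;
        late  += path[i].max_d;
    }

    /* Setup: worst-case data must beat the capture edge minus setup.
       Hold: best-case data must not change before the hold window ends. */
    printf("setup slack: %+.3f ns\n", (t_clk - t_setup) - late);
    printf("hold  slack: %+.3f ns\n", early - t_hold);
    return 0;
}
```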

    The timing values (typical values) come from library lookup tables indexed by gate type, input slew and output load, and from models of the interconnect between gates. Back in the day, you would have tables for different process corners: slow/slow (SS) for slow NFET/slow PFET, fast/fast (FF), typical/typical (TT) and permutations thereof. You would analyze in each of the corners, tweak the design to fix timing violations, and all was good. But then it got complicated.

    At 40nm, margins represented purely as corners became too pessimistic to get reasonable yield at reasonable power, because statistical sampling across many lots from many designs buries the different variances of different designs in the final variance, which is then too pessimistic for any single design. Statistical timing analysis should have been the ideal solution, but performance and other issues eventually killed that approach. So the foundries aimed for something that could support conventional STA methods with adjustments. They split measured variances into a design-dependent variance (on-chip variation, or OCV) and a design-independent part (the die-to-die variance) and called the latter "global". That gives you corners called SSG, TTG and FFG. A design team must then add back in OCV variance, based on the structure of their design, to get the true variances they need to model. But they can't just add/subtract the old-style 3σ to these corners; that would be even more pessimistic than the traditional corners, and the whole point is to minimize pessimism.

    So how do you calculate OCV? You still want to stick to single-pass analysis, enhanced by one of several methods that approximate measured variances within those constraints. You can pick AOCV, which uses pre-characterized chains of gates to get variances at the end of a chain, or POCV or SOCV, which in different ways compute variances at each stage in a path. (LVF is a recently introduced format which aims to combine representations for all these methods in one standard, but it does not prescribe how the calculation should actually be done.)

    What is important in all these methods is that you are propagating typical values as delays, and the delay and variance calculated this way only serve as an accurate representation of the underlying distribution if that distribution is normal (Gaussian). If this assumption is reasonable (and it is at normal voltages), then as an input distribution passes through the stages in a path, the average input delay to the next stage is the sum of the average delays up to that stage, because that is how Gaussians sum.

    But when distributions are skewed, as they are at low voltage, something different happens. The sum of skewed distributions tends to a normal distribution as you pass through stages (thanks to the Central Limit Theorem) but at each stage the average of the distribution shifts away from the sum of the typical values up to that stage. This undermines the calculation based on typical values in two ways. First, the true average progressively moves to a value greater than the sum of typical values up to that stage. And second, the output slew lookup, which is now based on an incorrect delay value, is therefore also incorrect and this error also compounds. When you get to the end of the path to check setup and hold, the computed typical can differ from the true average by as much as 3σ for the distribution, as large as the amount you are trying to correct for with your OCV calculation. And that means on a path like this, the typical value adjusted by 3σ on one side could be extremely pessimistic and on the other side extremely optimistic.
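
    The shift is easy to reproduce numerically. This Monte Carlo sketch (with made-up lognormal stage delays, not real library data) sums skewed delays over a 16-stage path and compares the true average of the path delay with the sum of the per-stage typical (mode) values; with these parameters the computed typical lands a couple of σ below the true average, which is exactly the error described above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Standard normal deviate via Box-Muller. */
static double randn(void)
{
    const double PI = 3.14159265358979323846;
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);  /* in (0,1) */
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    const int stages = 16, trials = 200000;
    /* Skewed (lognormal) stage delay: its mode and mean differ. */
    const double mu = 0.0, sigma = 0.5;
    const double mode = exp(mu - sigma * sigma);  /* per-stage typical */

    double sum = 0.0, sum2 = 0.0;
    for (int t = 0; t < trials; t++) {
        double path = 0.0;
        for (int s = 0; s < stages; s++)
            path += exp(mu + sigma * randn());    /* one stage delay */
        sum  += path;
        sum2 += path * path;
    }
    const double mean = sum / trials;             /* true path average */
    const double sd   = sqrt(sum2 / trials - mean * mean);

    printf("sum of per-stage typicals: %.2f\n", stages * mode);
    printf("true mean of path delay  : %.2f  (shift = %.2f sigma)\n",
           mean, (mean - stages * mode) / sd);
    return 0;
}
```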

    Some people argue this is a non-problem; that in fact these differences average out. That doesn't seem very likely to me. The math of combining skewed distributions leading to a shift in the average is indisputable. Also, gate timing distributions should always skew to longer delays, since non-linearity near the switching threshold favors longer rather than shorter delays at low voltage. There's really no way these shifts can cancel out in a stage-based calculation. The AOCV approach could in principle get this right, since it pre-characterizes chains of gates, which should incorporate all the effects; apparently, though, it doesn't take account of slews, so it's still wrong. Not to mention that the lookup tables for this approach could get rather large.

    Maybe you could fix the stage-based approach by using left and right variances at each stage to compute a shift at that stage, which you would then use to get the delay and slew lookup right. There have been attempts along these lines, though it’s not clear they have been very successful. Or more generally, you could model a skewed distribution using 3 points and evolve that along the path. This might be mathematically feasible, but I imagine there would be problems in performance. At minimum you’d have to do 3 divisions to scale this model curve to a (reasonably sized) lookup table so you could figure out the shift, then 3 multiplications to scale back, none of which is going to help run-time. And I don’t see any way you could emulate the correct behavior using only addition.

    The only way to do this correctly, at least along a set of paths of concern, is variance-aware, transistor-based modeling, either using Monte Carlo SPICE (which would be very slow) or CLKDA FX analysis (which is much faster). To get a more knowledgeable analysis of the whole problem and the FX approach, click HERE.

    More articles by Bernard…


    Your Car Will Never be Secure
    by Roger C. Lanctot on 01-24-2016 at 4:00 pm

    The automotive cybersecurity forum put on by the National Highway Traffic Safety Administration (NHTSA) yesterday in Washington, DC, surfaced a wide range of issues and conflicts at the heart of the connected car industry. One clear takeaway from the event was that cars will never be secure.
    Continue reading “Your Car Will Never be Secure”


    Star Wars, the Force and the Power of Parallel Multicore Processing
    by George Teixeira on 01-24-2016 at 12:00 pm

    During the '80s, the original Star Wars movies featured amazing future technology and were all about "the power of the Force." The latest movie has now broken all box office records, and it got me thinking about how much IT and computing technology has progressed over the years, yet how much still remains untapped.

    Yes, several of the envisioned gains have come true, many of them driven by Moore's Law and the growing force of the microprocessor revolution. For example, server virtualization software such as VMware radically redefined consolidation savings and productivity, CPU clock speeds got faster, and microprocessors became commodities used everywhere, powering PCs, laptops, smartphones and intelligent devices of all types. But the full force and promise of using many microprocessors in parallel, what we now call multicore, still remains largely untapped, and I/O continues to be the major bottleneck holding the IT industry back from the next revolution in consolidation, performance and productivity.

    Virtual computing is still bottlenecked by I/O. Just as city drivers can only dream about flying vehicles as gridlock haunts their morning commute, IT is left wondering if they will ever see the day when application workloads will reach light speed.

    How can it be that with multi-core processing, virtualized apps, abundant RAM and large amounts of flash, you still have to deal with I/O-starved virtual machines (VMs) while many processor cores remain idle? Yes, you can run several independent workloads at once on the same server using separate CPU and memory resources, but that's where everything begins to break down. The many workloads in operation generate concurrent I/O requests, yet only one core is charged with I/O processing. This architectural limitation strangles the life out of application performance. Instead of one server doing vast quantities of work, IT is forced to add more servers and racks to deal with I/O bottlenecks; this sprawl goes against the consolidation and productivity savings that are the basic premise and driver of virtualization.

    All it takes, then, is a few VMs running simultaneously on multi-core processors, churning out almost inconceivable volumes of work, and you quickly overwhelm the one processor tasked with serial I/O. Instead of a flood of accomplished computing, a trickle of I/O emerges. IT is left feeling like the kids who grew up watching Star Wars, asking: where are our flying starships, and when can we travel at light speed?!

    The good news is that all is not lost. DataCore has a number of bright minds hard at work bringing a revolutionary breakthrough for I/O to prime time: DataCore Parallel I/O technology lets virtualized traffic flow through without slowdown. Its unique software-defined parallel I/O architecture is designed to capitalize on today's powerful multi-core/parallel processing infrastructure. By enlisting software to drive I/O processing across many different cores simultaneously, it eradicates I/O bottlenecks and drives a higher level of consolidation savings and productivity. The better news is that this technology is already on the market today.

    Just as Star Wars shattered the world box-office record, check out how DataCore recently set a new world record for price-performance on a hyperconverged system (on the Storage Performance Council's peer-reviewed SPC-1 benchmark). DataCore also reported the best performance per footprint and the fastest response times ever, and so while the numbers do not actually reach light speed, DataCore has lapped the field not once but multiple times. See for yourself the latest benchmark results in this article that appeared in Forbes: The Rebirth of Parallel I/O.

    How? DataCore's software actively senses the I/O load being generated by concurrent VMs. It adapts and responds dynamically by assigning the appropriate number of cores to process the input and output traffic. As a result, VMs no longer sit idle waiting for a serial I/O thread to become available. Should the I/O load lighten, CPU cores are freed to do more computational work.
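
    Conceptually, the difference is between funneling every completion through one I/O thread and letting each core drain its own request queue. The following sketch shows that general technique under pthreads; it is only an illustration of parallel I/O workers, not DataCore's code:

```c
#include <pthread.h>
#include <stdio.h>

#define CORES 4
#define REQS_PER_CORE 100000

/* One completion counter per core; each is touched only by its own
   worker, so the workers never serialize against each other. */
static long completed[CORES];

static void handle_io(int core, int req)
{
    (void)req;              /* stand-in for real I/O processing */
    completed[core]++;
}

/* Each core gets its own worker draining a per-core request queue,
   instead of all VMs queuing behind a single serial I/O thread. */
static void *io_worker(void *arg)
{
    int core = (int)(long)arg;
    for (int r = 0; r < REQS_PER_CORE; r++)
        handle_io(core, r);
    return NULL;
}

int main(void)
{
    pthread_t worker[CORES];
    for (long c = 0; c < CORES; c++)
        pthread_create(&worker[c], NULL, io_worker, (void *)c);

    long total = 0;
    for (int c = 0; c < CORES; c++) {
        pthread_join(worker[c], NULL);
        total += completed[c];
    }
    printf("%d workers completed %ld requests in parallel\n", CORES, total);
    return 0;
}
```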

    This not only solves the immediate performance problem facing multi-core virtualized environments, it also significantly increases the possible VM density per physical server. It allows IT to do far more with less: fewer servers and racks, and less space, power and cooling needed to get the work done. In effect, it achieves remarkable cost reductions through maximum utilization of CPU cores, memory and storage while fulfilling the productivity promise of virtualization.

    You can read more about this in DataCore’s white paper, “Waiting on I/O: The Straw that Broke Virtualization’s Back.”