In his recent blog on EETimes, Kurt Shuler of Arteris took a whimsical look at the hype surrounding the IoT, questioning the overall absence of practicality and a seemingly misplaced focus on use cases at the expense of a coherent architecture. I don't think it is all that bleak, but when it comes to architecture, Kurt is right, and sensor clusters make the case.
If all a community has to work with is a hammer, every problem looks like a nail – so it is with the IoT. We’ve been trying to hammer IoT solutions into the classic three-tier telecom hierarchy: core, aggregation or distribution, and edge or access. That hierarchy developed over time as a way to combine high-throughput voice and data traffic on an IP packet-based infrastructure. Trillions of dollars of infrastructure now deliver multimedia services to billions of users.
And it’s almost all the wrong architecture for most IoT applications.
Most things on the IoT don’t stream in lengthy, sustained conversations; rather, they mostly fire byte-oriented messages intermittently. In the name of efficiency, many IoT devices don’t even use IP – it carries significant overhead for short messages, burning watts that battery-powered, wireless devices have in scant supply.
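To see why IP is a poor fit for short messages, a rough back-of-the-envelope sketch helps. The frame layout below is purely illustrative (a hypothetical 6-byte sensor reading), but the header sizes come from the IPv6 and UDP specifications:

```python
import struct

# Hypothetical compact sensor frame: 16-bit device ID, 8-bit message
# type, 16-bit reading, 8-bit checksum -- 6 bytes on the wire.
frame = struct.pack(">HBHB", 0x0042, 0x01, 2371, 0x5A)

# Fixed header sizes for comparison (RFC 8200 and RFC 768).
IPV6_HEADER = 40
UDP_HEADER = 8

payload = len(frame)                      # 6 bytes of actual data
ip_total = IPV6_HEADER + UDP_HEADER + payload

print(f"payload: {payload} B, as IPv6/UDP datagram: {ip_total} B")
print(f"header overhead: {100 * (ip_total - payload) / ip_total:.0f}%")
```

For a reading this small, headers account for nearly ninety percent of the bytes transmitted, and every transmitted byte costs radio-on time that a coin-cell device cannot spare.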
Nonetheless, you, I, and everyone else have to connect to this massive network if we expect to reap the benefits of the cloud or fog, big data, and a host of services mostly also still on the hype cycle. As we would expect, Cisco and others are trying to hammer the IoT, cloud, and fog into shape. In their defense, re-engineering the core network for the range of IoT protocols is dreaming; at some point, IoT traffic will either be IP native or IP-tunneled as it passes into and through the core.
Kurt correctly pointed out in his blog post “Is Internet of Things Hype ‘Stuck on Stupid’?” that for most IoT applications, the edge or the aggregator is where we need new magic. Many of the IoT devices themselves are powered by microcontrollers – good for basic sensor interfacing and WSN connectivity, but easily overwhelmed by transcoding, analytics, security (encryption, yes; device-based anti-virus, not so much), and other requirements.
Aggregators can be the answer if conditions are right. For instance, most smart meter networks in the US run on ZigBee, and those meters all mesh to their neighbors until they reach an aggregator box, which usually puts them onto the IP network talking to the utility. This particular example benefits from homogeneity, and a green field where connectivity didn't previously exist – creating a high-level architecture to deal with it wasn't that hard, and most of the effort has gone into reliability and security.
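The meshing behavior can be sketched in a few lines. The topology and names below are toy illustrations, not any real ZigBee stack; the point is only that each meter needs to hear just a few neighbors, and messages hop until they reach the one box that bridges onto IP:

```python
from collections import deque

# Toy neighbor table: each meter can only hear a few nearby meters;
# "aggregator" is the box that bridges the mesh onto the IP network.
neighbors = {
    "meter_a": ["meter_b"],
    "meter_b": ["meter_a", "meter_c"],
    "meter_c": ["meter_b", "aggregator"],
    "aggregator": ["meter_c"],
}

def hops_to_aggregator(start):
    """Breadth-first search: fewest mesh hops from a node to the aggregator."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == "aggregator":
            return hops
        for nxt in neighbors[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None  # unreachable

print(hops_to_aggregator("meter_a"))  # meter_a -> b -> c -> aggregator: 3 hops
```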
That’s an example of what ARM terms a fixed sensor cluster, and it will resemble many industrial IoT applications. Sensors at the edge connect in a more-or-less closed network that only authorized devices can join, and whose data only authorized users can see. Outside of the chosen wireless protocol connecting the sensors, this looks familiar to most designers of operational technology.
The consumer IoT is currently dominated by personal sensor clusters, typically using a smartphone or tablet and the existing Wi-Fi access point in the home or office as the gateway. This is why many of the IoT first-movers have opted for Bluetooth Smart or Wi-Fi as their wireless of choice; the benefits of connecting with every smartphone are hard to dispute. There are also ideas like the Revolv hub that can handle a few other popular IoT protocols, like INSTEON and Z-Wave.
The common element in smartphones, gateways, and hubs near the edge? SoCs, with enough horsepower to connect multiple sensor types and manage traffic from a fairly limited number of devices that more or less stay put once they enter service. Once a personal cluster is configured and connected, its data can be analyzed in the cloud.
The real thumper for the IoT that ARM describes in their white paper “Sensors as a Service on the Internet of Things” is the agile sensor cluster – with “… endpoints that move over fairly wide ranges and join and leave networks of interest.” This is going to require a whole new class of SoC at both the device and the gateway, able to reconfigure and adapt to networks where the numbers and data formats of participants aren’t necessarily known in advance. Localized real-time traffic analysis, multi-layer switching and protocol conversion, data format equalization, and application service discovery and presentation become vitally important.
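What "data format equalization" might look like at such a gateway can be sketched in code. Everything below is hypothetical – the decoders, device names, and payload layouts are invented for illustration – but it captures the core requirement: endpoints join and leave at runtime, each speaking its own format, and the gateway normalizes readings into one common record:

```python
# Hypothetical agile-cluster gateway: endpoints register a decoder for
# their own payload format when they join; the gateway normalizes every
# reading into a common record. All formats here are illustrative.

def ble_decoder(raw):
    # e.g. temperature in half-degrees C, one byte
    return {"temp_c": raw[0] / 2}

def zigbee_decoder(raw):
    # e.g. big-endian centi-degrees C, two bytes
    return {"temp_c": int.from_bytes(raw, "big") / 100}

class Gateway:
    def __init__(self):
        self.endpoints = {}              # device id -> decoder

    def join(self, dev_id, decoder):
        self.endpoints[dev_id] = decoder

    def leave(self, dev_id):
        self.endpoints.pop(dev_id, None)

    def ingest(self, dev_id, raw):
        if dev_id not in self.endpoints:
            raise KeyError(f"unknown endpoint {dev_id}")
        record = self.endpoints[dev_id](raw)
        record["source"] = dev_id        # normalized record, cloud-ready
        return record

gw = Gateway()
gw.join("ble-01", ble_decoder)
gw.join("zb-07", zigbee_decoder)
print(gw.ingest("ble-01", bytes([45])))               # 22.5 C
print(gw.ingest("zb-07", (2250).to_bytes(2, "big")))  # also 22.5 C
```

In real hardware, the interesting question is what runs each decoder – a CPU core, a DSP, or reconfigurable logic – which is exactly where SoC flexibility, and the interconnect between those blocks, starts to matter.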
This is why the idea of a network-on-chip, such as Arteris NoC technology, becomes so important. In order to save power, reduce complexity, and ease software design in a scenario where processing cores, acceleration units (including DSP or programmable logic), and wireless interfaces are increasingly flexible – in the endgame, potentially reconfigurable – SoC designers need to move beyond simplistic busing and switching to the abstraction a NoC offers.
We need to stop thinking so much in terms of existing telecom networks and streaming media, and start thinking more in terms of sensor cluster implementation and optimization when creating effective SoC architectures for the IoT edge and aggregator.