IoT Tutorial: Chapter 6 – IoT at the Edge

IoT and the Edge Computing Paradigm – Why Edge Computing?

In previous chapters we illustrated how cloud computing enables today's IoT applications to benefit from its capacity, scalability, elasticity and pay-as-you-go nature. In recent years a number of IoT/cloud deployments have demonstrated the merits of integrating IoT and the cloud. Nevertheless, the proliferation of IoT applications, including BigData applications that process IoT streams, seems to be driving conventional centralized cloud architectures to their limits.

For example, in large-scale applications, the integration of millions of IoT streams in the cloud leads to very high bandwidth consumption, significant network latency and the need to store a large amount of information with limited (or even zero) business value. This is typically the case when sensor data that does not frequently change (e.g., temperature information) is streamed to the cloud. Likewise, IoT applications increasingly ask for higher flexibility in handling multiple distributed and heterogeneous devices, in a way that provides scalability and effective handling of data security and privacy, especially in cases where users need to limit access to their private data.
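A common way to avoid streaming slowly-changing sensor data to the cloud is deadband filtering at the edge: a reading is forwarded only when it deviates from the last forwarded value by more than a threshold. The sketch below is a minimal illustration of the idea; the threshold and sample values are invented for this example and do not come from a specific platform.

```python
def deadband_filter(readings, threshold=0.5):
    """Forward a reading only when it differs from the last
    forwarded value by more than the threshold (deadband)."""
    forwarded = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > threshold:
            forwarded.append(value)  # would be streamed to the cloud
            last = value
        # otherwise the reading is dropped at the edge node
    return forwarded

# Temperature samples that barely change: only 2 of 8 are forwarded.
samples = [21.0, 21.1, 21.2, 21.1, 21.0, 22.0, 22.1, 22.0]
print(deadband_filter(samples))  # [21.0, 22.0]
```

Even this trivial rule cuts the upstream traffic by 75% in the example above, which hints at why edge-side pre-processing yields substantial bandwidth savings at scale.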

Also, data-intensive IoT applications (including BigData applications) ask for a cost-effective and resource-efficient handling of data streams, which might involve storage and processing of data closer to end-users and physical systems. Typical examples of such IoT and BigData applications are discussed in later chapters and can be found in the areas of next-generation manufacturing (Industrie 4.0), smart cities and media.

In order to cope with these requirements, the cloud industry has recently introduced the edge computing paradigm (also called “fog computing”), which extends conventional centralized cloud infrastructures with an additional storage and processing layer that enables execution of application logic close to end-users and/or devices. Edge computing foresees the deployment of edge/fog nodes, which may range from IoT embedded devices featuring limited storage, memory and processing capacity to whole data centers (i.e. “local clouds”) which are deployed close to end-users and physical infrastructures.

Overall, the edge/fog computing paradigm extends the cloud paradigm to the edge of the network. In this way, it is also appropriate for serving mobile users, who typically have local short-distance high-rate connections and hence can benefit from computing, storage and communication resources in their vicinity, instead of interfacing to a centralized back-end cloud. The proximity of resources helps overcome the high latency that is associated with the provision of cloud services to mobile users.

Taxonomy of IoT Edge Computing Applications
Edge computing has therefore been introduced in order to deal with IoT applications that suffer from the limitations and poor scalability of the centralized cloud paradigm, including:

  • Applications susceptible to latency, such as near-real-time IoT applications. Typical examples of such applications are the ones based on Cyber-Physical Systems in the areas of manufacturing, urban transport and smart energy, where actuation has to be driven in real time, based on data processing close to the physical infrastructures. Note that several of these applications also require predictable latency, which is hardly possible when processing data and invoking services from the back-end cloud.
  • Geo-distributed applications, including sensor network applications, which have to cope with data processing at the local level, prior to streaming data to the cloud. The edge/fog computing paradigm enables these applications to deal with large-scale, geographically and administratively dispersed deployments. Typical examples include environmental monitoring and smart city applications, which are based on the collection and processing of streams from thousands or even millions of distributed sensors. Edge nodes enable the decentralization of these applications, thus facilitating scalable processing and delivering significant bandwidth savings.
  • Mobile applications, notably applications involving fast-moving objects (e.g., connected vehicles, autonomous cars, connected rail). As already outlined, these applications require interfacing of moving devices/objects to local resources (computing, storage) residing in their vicinity.
  • Large-scale distributed control systems (smart grid, connected rail, smart traffic light systems), which typically combine properties of the geo-distributed and real-time applications outlined above. Edge computing deployments enable them to deal with scalability and latency issues.
  • Distributed multi-user applications with privacy implications and need for fine-grained privacy control (such as processing of personal data). These applications can benefit from a decentralization of the storage and management of private data to the various edge servers, thus alleviating the risk of transferring, aggregating and processing all private datasets at the centralized cloud. Furthermore, edge computing deployments enable end-users to have better and isolated control over their private data within the edge servers.
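For the geo-distributed class above, the bandwidth saving typically comes from aggregating raw streams at an edge node and shipping only compact summaries upstream. The following sketch is purely illustrative; the window size and the (min, mean, max) summary format are assumptions made for the example, not taken from a particular product.

```python
from statistics import mean

def aggregate_window(readings, window=4):
    """Summarize raw sensor readings per fixed-size window,
    sending (min, mean, max) instead of every sample."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append((min(chunk), round(mean(chunk), 2), max(chunk)))
    return summaries

# Eight raw samples collapse into two summary tuples sent to the cloud.
raw = [10, 12, 11, 13, 40, 42, 41, 43]
print(aggregate_window(raw))  # [(10, 11.5, 13), (40, 41.5, 43)]
```

In a real deployment the window would usually be time-based rather than count-based, but the principle is the same: the edge node keeps the high-velocity raw stream local and forwards only what the centralized analytics actually need.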

Mobile Edge Computing
Edge computing deployments that involve mobile and roaming devices are characterized as “Mobile-Edge” Computing deployments. They provide the means for content, services and applications to be accelerated, while ensuring increased responsiveness from the edge/cloud infrastructure. Moreover, they facilitate the engagement of mobile service operators, which can offer their services taking into account the radio and network conditions in the edge servers’ vicinity. “Mobile-Edge” computing deployments are characterized by the following properties:

  • On-premise isolation: Mobile-edge deployments can be isolated from the rest of the IoT network, thus enabling M2M applications that require increased security. Indeed, this isolation renders M2M applications less prone to errors at other points of the network.
  • Proximity for low-latency processing: Edge servers enable access to mobile devices, while running services (including data intensive services) close to them. This reduces latency and provides a basis for bandwidth savings and optimal user experience (e.g., due to better responsiveness).
  • Location and context awareness: Edge servers are typically associated with specific locations and can therefore enable location-based services, a prominent class of mobile/roaming services. Furthermore, this enables a wave of new business services that utilize network context and location, such as services associated with users’ context, points of interest and events.

IoT/Edge Computing Convergence Challenges
Edge computing based deployments of IoT applications are still in their infancy, even though we expect them to proliferate due to their clear economic benefits (e.g., bandwidth savings, energy savings) and enhanced functionalities when compared to conventional IoT/cloud deployments. Nevertheless, a number of technical challenges exist, which have recently given rise to additional technological developments. One of these challenges concerns the integration of IoT nodes with the cloud, given that cloud infrastructures are typically based on powerful computing nodes, whereas IoT deployments involve resource-constrained embedded systems.

In order to address this challenge, edge computing infrastructures integrate containers (e.g., Docker or implementations of the Open Container Initiative specifications) as the virtualization technology for application deployment and enactment. Owing to their lightweight and resource-efficient nature, container technologies are appropriate for supporting rapid and flexible deployment of application logic on edge nodes. Containers are indeed much smaller in size compared to Virtual Machines (VMs), which facilitates their rapid transportation and deployment to edge devices/nodes outside the high-speed networks of the cloud data center. Note, however, that early efforts towards integrating container virtualization into edge computing are tailored to the architectures and infrastructures of a single vendor and are not appropriate for large-scale deployments comprising heterogeneous virtualization infrastructures and containers.

Likewise, there are no tools and techniques providing easy ways for run-time deployment and management of heterogeneous container technologies in the scope of large-scale edge computing infrastructures. Thus, several edge computing deployments still have to rely on conventional high-overhead virtualization infrastructures.

Another challenge relates to the development and deployment of techniques for efficient distributed data analytics, given that non-trivial edge computing applications (such as large-scale applications and/or applications that are subject to QoS (Quality-of-Service) constraints) need to dynamically distribute analytics functions for an optimal placement between the edge and the cloud. At the same time, efficient data analytics at the edge of the network is a key prerequisite for supporting a large number of real-time control applications, which exploit high-performance edge processing of data streams in order to drive actuation functionalities.
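In its simplest form, the placement problem described above reduces to comparing the end-to-end latency of an edge versus a cloud placement against the application's QoS budget. The sketch below is a deliberately simplified illustration; the cost model, parameter names and numbers are all assumptions made for the example (real placement engines also weigh bandwidth, energy and monetary cost).

```python
def place_function(edge_exec_ms, cloud_exec_ms, edge_rtt_ms,
                   cloud_rtt_ms, latency_budget_ms):
    """Pick 'edge' or 'cloud' for an analytics function: prefer the
    (typically cheaper and faster) cloud execution, but fall back to
    the edge when the cloud round-trip would violate the QoS budget."""
    edge_total = edge_exec_ms + edge_rtt_ms
    cloud_total = cloud_exec_ms + cloud_rtt_ms
    if cloud_total <= latency_budget_ms:
        return "cloud", cloud_total
    if edge_total <= latency_budget_ms:
        return "edge", edge_total
    return "infeasible", min(edge_total, cloud_total)

# A 50 ms control loop cannot tolerate an 80 ms cloud round-trip,
# so the function is placed at the edge despite slower execution.
print(place_function(20, 5, 2, 80, 50))  # ('edge', 22)
```

The interesting (and unsolved) part is doing this dynamically at runtime, as network conditions and workloads change, rather than once at deployment time.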

Moreover, despite the emergence of IoT analytics platforms, there are still no frameworks that can effectively deal with semantic interoperability across diverse high-velocity data streams, i.e., in a way that alleviates heterogeneity not only in terms of diverse data formats and protocols, but also in terms of the semantics of the data streams. State-of-the-art streaming engines for cloud computing (e.g., Complex Event Processing (CEP) systems) are not appropriate for edge computing, given their poor performance when dealing with networked streams and the associated requirements for very low latency.

Furthermore, state-of-the-art platforms are not able to dynamically allocate data processing to different devices at runtime, nor to dynamically realize the optimum “split” between centralized (i.e., on central cloud infrastructures) and decentralized (i.e., on the edge) processing at runtime. The issue of IoT analytics will be more extensively discussed in coming chapters, notably the chapters dedicated to IoT analytics and IoT/BigData convergence.
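One way to picture this edge/cloud “split” is a pipeline in which a latency-critical rule fires actuation locally, while only condensed threshold-crossing events travel to the central cloud for heavier analytics. The sketch below is a hypothetical illustration; the threshold, event format and “shutdown” action are invented for the example.

```python
def split_processing(stream, threshold=100):
    """Run a latency-critical rule at the edge (immediate actuation)
    and forward only threshold-crossing events to the cloud."""
    actuations, cloud_events = [], []
    for t, value in stream:
        if value > threshold:
            actuations.append((t, "shutdown"))         # edge-side, real time
            cloud_events.append({"t": t, "v": value})  # condensed, upstream
    return actuations, cloud_events

# Four raw samples; two cross the threshold and trigger local actuation,
# and only those two events are forwarded to the cloud.
stream = [(0, 90), (1, 120), (2, 95), (3, 130)]
acts, events = split_processing(stream)
print(acts)  # [(1, 'shutdown'), (3, 'shutdown')]
```

A platform that could move the boundary of such a pipeline between edge and cloud at runtime, in response to load and network conditions, is precisely what the text above identifies as missing.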

IoT deployments based on edge computing also present new security and privacy challenges. Indeed, they involve physical systems, sensing devices and (smart) embedded systems which interact with multiple cloud infrastructures, thus imposing needs for trustworthiness at multiple layers, including secure and privacy-friendly operation of containers, and secure information exchange across networks and clouds. Such challenges become more pressing as several IoT applications (e.g., healthcare) involve processing of private data at the edge.

Finally, there is a need for “Edgification” of conventional applications. Significant effort has recently been devoted to migrating conventional distributed services (e.g., SOA services) to the cloud. Nevertheless, migrating services to an edge computing environment involves the dynamic (re-)distribution of the service across edge and cloud parts, and this issue has not been adequately addressed yet.

In coming years we will witness an increasing number of edge computing deployments addressing the above-listed challenges. The proliferation of edge deployments will be a direct result of the need to save bandwidth and energy costs, to deal with scale and the resource-constrained nature of IoT devices in a way that still leverages virtualization, and to increase data privacy for end-users.

Resources for Further Reading
For a general introduction to the fog/edge computing paradigm, please consult the following articles:

  • Flavio Bonomi, Rodolfo Milito, Jiang Zhu, Sateesh Addepalli, “Fog computing and its role in the internet of things”, Proceedings of the first edition of the MCC workshop on Mobile cloud computing, MCC ’12, pp. 13–16.
  • Flavio Bonomi, Rodolfo Milito, Preethi Natarajan, and Jiang Zhu. 2014. Fog computing: A platform for internet of things and analytics. In Big Data and Internet of Things: A Roadmap for Smart Environments. Springer, 169–186.
  • Fernando, S. W. Loke, and W. Rahayu, “Mobile cloud computing: A survey,” Futur. Gener. Comput. Syst., vol. 29, no. 1, pp. 84–106, 2013.
  • Shiraz, A. Gani, R. H. Khokhar, and R. Buyya, “A Review on Distributed Application Processing Frameworks in Smart Mobile Devices for Mobile Cloud Computing,” Commun. Surv. Tutorials, IEEE, vol. 15, no. 3, pp. 1294–1313, 2013.

Edge Computing Applications and Architectures for specific application areas are discussed in the following publications, papers and books:

  • Eva Geisberger/Manfred Broy (Eds.): Living in a networked world. Integrated research agenda Cyber-Physical Systems (agendaCPS) (acatech STUDY), Munich: Herbert Utz Verlag 2014.
  • NGMN white paper, Audiovisual media services and 5G – https://tech.ebu.ch/docs/public/5G-White-Paper-on-Audiovisual-media-services.pdf
  • Industrial Internet Consortium, “Industrial Internet Reference Architecture”, version 1.7, June 2015.

Edge Computing Infrastructures and Applications are discussed in other LinkedIn posts as well.
