The iPhone 7 Intel Modem Controversy Explained!
by Daniel Nenni on 06-18-2016 at 7:00 am

The media is really having a field day on this one so I think it deserves further discussion. The rumor is that Intel has won the modem socket in the iPhone 7. The same rumor circulated about Intel winning the modem socket for the iPhone 6 and the iPhone SE, so it has truly reached urban-legend status. The question I have is: why does anybody really care? The modem in question was not designed by Intel, it is not manufactured by Intel, and it does not guarantee Intel another iPhone modem socket, so seriously, what is the big deal here?

First a little background: QCOM has supplied iPhone modems for the past few years. My iPhone 5, 5s and 6 have 28nm QCOM modems. The iPhone 6s has a 20nm QCOM modem, and right now QCOM is shipping a 14nm modem which is inside the latest and greatest Samsung smartphone (S7). I'm not going to get into the modem speeds-and-feeds debate because it is mostly "benchmarking magic." Seriously, even Harry Potter would be impressed by some of those benchmark claims, and we are all carrier-speed (Verizon/AT&T) limited anyway so it doesn't really matter.

According to Bloomberg:

“Apple Inc.’s next iPhone will use modems from Intel Corp., replacing Qualcomm Inc. chips in some versions of the new handset, a move by the world’s most-valuable public company to diversify its supplier base.

Apple has chosen Intel modem chips for the iPhone used on AT&T Inc.’s U.S. network and some other versions of the smartphone for overseas markets, said people familiar with the matter. IPhones on Verizon Communications Inc.’s network will stick with parts from Qualcomm, which is the only provider of the main communications component of current versions of Apple’s flagship product. Crucially for Qualcomm, iPhones sold in China will work on Qualcomm chips, said the people, who asked not to be identified because Apple hasn’t made its plans public.”

So they are telling us that AT&T iPhones will have a 28nm Intel modem while Verizon iPhones will have a 14nm QCOM modem? Would Apple really do this? Is Apple really going to pair a leading edge FinFET SoC with a 28nm planar modem? Let's not forget about the infamous Battery Gate issue between the Samsung and TSMC versions of the Apple A9 SoC. Some clever fellow even made an app to tell us which chip was inside. I betcha Apple did not see that one coming.

Or will Apple use 28nm modems in all of their iPhones while their fiercest competitor (Samsung) uses QCOM's 14nm modem, which is two process generations ahead of Intel's?

And how much is Intel actually going to make on a modem chip that they did not design or manufacture? Certainly not enough profit to be noticeable, especially if there is Contra Revenue involved.

In the past I would have chalked this rumor up to media insanity, but after Apple split the A9 between Samsung and TSMC I can only shake my head. Apple does punish its suppliers, so maybe this is a warning to longtime modem partner QCOM. But by punishing QCOM, Apple is rewarding TSMC, since TSMC manufactures the Intel modem in question, so it's all good for the fabless semiconductor ecosystem.

And just when I thought the semiconductor industry couldn’t be more interesting!


AMD’s 7th Generation APU Brings Many Performance Tweaks And The Last Hurrah Before Zen
by Patrick Moorhead on 06-17-2016 at 4:00 pm

Advanced Micro Devices has already told us that 2016 was going to be the year of graphics, but the reality is that they also have a lot going on in their CPU and APU division as well. In fact, in addition to Advanced Micro Devices's newly announced 7th Generation APUs in 2016, the company is also expected to launch their new Zen CPU cores, which are being eagerly awaited by many in the industry. Zen won't be in high volume until 2017, so tweaks to the current generation are important. AMD's 7th generation APUs show that even though we constantly pay attention to architectural improvements when it comes to CPUs, there are still plenty of platform improvements and tweaks that can be made to squeeze additional performance out of an existing design. It may not get the headlines, but it drives revenue, something the company needs now.


AMD 7th Gen Die Shot (Photo credit: AMD)

Beneficial performance and power upgrades for PCs
The new 7th generation of APUs is similar to the previous generation in that it still uses the 28nm fab process and still has four Excavator CPU cores and AMD Radeon GCN GPU cores. However, this new "Bristol Ridge" codenamed part is designed to be much faster than the previous Carrizo APU. Advanced Micro Devices has upgraded the GPU in the Bristol Ridge APU to allow for performance improvements of up to 37% according to AMD's own numbers. We have not yet received parts to benchmark, but will do so if we get them. The new generation of AMD's APUs also accomplishes multimedia tasks at up to 12% lower power than the previous generation, according to Advanced Micro Devices' numbers.

Many smaller improvements add up to some big ones

These performance improvements weren't the result of a process node shrink but rather a series of smaller improvements that together add up to something like one. Advanced Micro Devices did, however, see a small benefit to frequency and power consumption from a process improvement. Advanced Micro Devices has also implemented what are called "Shadow P-states" to achieve the best possible clock speeds for a given chip. Part of what enables the higher clock speeds is Advanced Micro Devices's implementation of Adaptive Voltage and Frequency Scaling (AVFS).

Advanced Micro Devices also implemented a reliability tracker on the "Bristol Ridge" APUs that allows the chip to operate at the optimal frequency to maximize its life and reliability while also maximizing clock speed. To further maximize performance, Advanced Micro Devices implemented a skin-temperature-aware power management system that allows the APU to utilize the entire device as a thermal sink in order to allow more sustained performance. A special controller was designed to model the thermal capacitance of the given platform and determine the appropriate thermal headroom to allow for certain clock speeds and durations. This feature was originally implemented in Advanced Micro Devices's smaller Mullins APU, but this is the first time that AMD is implementing it on a larger APU for a bigger market.
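To make this more concrete, here is a minimal sketch (in Python, purely illustrative; this is not AMD's controller or firmware) of the kind of first-order thermal-capacitance model such a controller could use to estimate how long a boost clock can be sustained before a skin temperature limit is reached. All resistance, capacitance and temperature values are hypothetical.

```python
# Illustrative first-order thermal model, NOT AMD's actual controller:
# a sketch of modeling a platform's thermal capacitance to estimate how
# long a boost clock can be held before the skin temperature limit.

def sustainable_boost_seconds(p_boost_w, t_start_c=35.0, t_skin_limit_c=45.0,
                              r_th_c_per_w=2.0, c_th_j_per_c=60.0, dt_s=0.1):
    """Integrate dT/dt = (P - (T - T_amb)/R_th) / C_th until the skin
    temperature limit is reached; returns the sustainable boost time."""
    t_amb_c = 25.0          # assumed ambient temperature (hypothetical)
    t_c = t_start_c
    elapsed = 0.0
    while t_c < t_skin_limit_c:
        d_temp = (p_boost_w - (t_c - t_amb_c) / r_th_c_per_w) / c_th_j_per_c
        if d_temp <= 0:      # steady state below the limit: boost indefinitely
            return float("inf")
        t_c += d_temp * dt_s
        elapsed += dt_s
    return elapsed

print(f"Boost sustainable for ~{sustainable_boost_seconds(15.0):.0f} s")
```

The design point is simply that a larger thermal capacitance (using the whole chassis as a heat sink) buys a longer boost window before throttling.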

All of these improvements are in addition to the upgraded video decoder, faster DDR4 memory support, and more and faster GPU cores. This includes support for Google's VP9 video codec and 4K HEVC, which are both necessary for any future video playback capabilities. The new product lineup will range from a 15W Advanced Micro Devices E2-9010 APU with a 2.2 GHz max frequency all the way up to a 34W Advanced Micro Devices FX 9830P with a maximum clock of 3.7 GHz. Advanced Micro Devices now has products that fill many of the different performance and TDP needs for the majority of the market with these parts. One product that Advanced Micro Devices is showcasing with their new Bristol Ridge APUs is HP's ENVY x360, which is a premium convertible notebook that is available now. It says a lot that AMD's new APU was selected for HP's premium device, as HP is on the rise in this space. I wrote about that here.

HP ENVY X360 (Photo credit: HP Inc.)

An interesting remainder of 2016 for AMD

Advanced Micro Devices is going to have an interesting 2016. The company is clearly working very hard to recapture market share on both CPUs and GPUs and revive their APU line of products. They are clearly making some meaningful improvements and trying to innovate where they can while still staying on the same process node.

Ex chip-jockeys like me are very impressed with what AMD has been able to do with the legacy node.


Bring on Zen

Once Advanced Micro Devices switches to 14nm FinFET with their Zen CPUs, we can finally start to expect Zen APUs with Polaris GPUs for the graphics. However, in Anshel Sag's conversations with Advanced Micro Devices executives at Computex, it was indicated that this is unlikely to happen until 2017. Until then, parts like Bristol Ridge are going to have to be the best of what Advanced Micro Devices has to offer, with all of the optimizations and tweaks that make it a better and smarter chip than the previous generation.


AMD CEO Lisa Su gives a Zen sneak peek in Taipei (Credit: AMD webcast)

For CPUs, it’s only up from here for AMD. Bring on Zen. And at AMD’s Computex event in Taipei, CEO Lisa Su re-confirmed the 40% IPC improvement with Zen. I don’t know how this is possible, but if it plays out across most workloads, PC and server CPUs just got infinitely more exciting in 2017.


“I AM ZEN” proclaimed AMD at Computex 2016 (Credit: AMD webcast)

Custom IC Layout Design at #53DAC
by Daniel Payne on 06-17-2016 at 12:00 pm

Last week at the #53DAC conference there was a lot of excitement in the air about custom IC design, especially at the Synopsys luncheon I attended on Tuesday, where customers like STMicroelectronics, GSI Technology, Samsung Foundry and the Synopsys IP group talked about their experiences using the new Custom Compiler tool. The luncheon was hosted in the Hilton and had the best food service at DAC this year, plus the room was packed with interested engineers.

STMicroelectronics
Atul Bhargava started out by explaining that his company helps make products for the smart city, smart home, smart driving and the IoT. Their primary technology is FDSOI, so they create their own libraries, foundation cells, I/O, memories and AMS designs. When they surveyed their own IC designers, they discovered that designers' time was spent roughly 30% on transistor and cell placement, 25% on routing and 45% on validation (DRC, LVS, EM, IR drop, reliability).

They like using the symbolic editing in Custom Compiler and how it helps automate device-level layout. Analog designers can quickly place dummy devices and use common-centroid topology automation. Even their analog layout patterns can be saved for future reuse. EM analysis is a quick process in Custom Compiler, and it can even be pushed to an earlier part of the design flow, right after schematic capture, where they can start to plug in currents for peak, average and RMS, then decide how many vias are required for a detailed layout.

The look and feel of Custom Compiler is very much like a modern web browser, so it's intuitive and easy to learn.

This group has been able to take an iPDK from 28nm PDSOI and quickly migrate it, along with all of the required IP. Even the SRAM and standard cell groups are now using Custom Compiler.

GSI Technology
Up second was Randy You, whose company was founded in 1995 with offices in both Sunnyvale and Taiwan, and has grown to 140 people. Their IC designs are differentiated from competitors' by offering higher performance and lower power consumption. On a recent 16nm project they started using the Custom Compiler tools and quickly noticed three areas of design challenge: design rule complexity, EM/IR analysis and balanced net routing.

With the Custom Compiler layout tool they use the Dynamic Rule Distance (DRD) feature to show layout designers if there are any design rule issues as they do custom layout; it even works with double patterning. A full DRC is still required after using the DRD feature, but using DRD reduces the number of full DRC runs required.

In their EM/IR design flow they can start with schematics, run circuit simulations to create currents, then use those current values to quickly recommend the metal widths required for layout, even before running a detailed EM/IR analysis, reducing layout/analysis iterations and saving time.
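As a rough illustration of that flow, here is a minimal Python sketch of the sizing arithmetic involved: given a simulated current and a hypothetical electromigration limit for a metal layer, compute a recommended minimum wire width and via count. The limits shown are made up; real numbers come from the foundry's reliability rules.

```python
# Sketch of the EM-driven sizing idea described above: given a simulated
# current and a (hypothetical) foundry electromigration limit, recommend
# a minimum wire width and via count before detailed layout.
import math

def min_metal_width_um(i_rms_ma, j_max_ma_per_um):
    """EM limits are often expressed as max current per unit width
    (mA/um) for a given metal layer; width >= I / J_max."""
    return i_rms_ma / j_max_ma_per_um

def vias_required(i_rms_ma, i_max_per_via_ma):
    """Round up to the number of vias needed to carry the current."""
    return math.ceil(i_rms_ma / i_max_per_via_ma)

# Example: 12 mA RMS on a layer rated 1.5 mA/um, vias rated 2 mA each
print(min_metal_width_um(12.0, 1.5))   # -> 8.0 um minimum width
print(vias_required(12.0, 2.0))        # -> 6 vias
```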

Balanced nets can now be auto-routed instead of manually routed, and the clock tree can be auto-routed to all I/O blocks.

Samsung Foundry
Next we heard from Bonhyuck Koo about their iPDK development group, which does FinFET design, uses multi-patterning, and combats design rule complexity. On their 14nm FinFET process the Custom Compiler tools were used in a color-aware design flow. They can even do density checks with coloring and have automated coloring.

For DRC and LVS checks they use an in-design approach, instead of relying solely on a batch approach, which causes too many iterations to reach a clean layout. Doing early EM checks also enables designers to meet reliability requirements earlier in the tool flow, instead of waiting until too late. The 10nm iPDK is coming next month in July, so stay tuned for the formal announcement.

Synopsys IP Group
Did you know that Synopsys has about 2,400 engineers doing soft IP design and 1,300 AMS IP designers? They use their own Custom Compiler tools for all of their work and have taped out test chips for Samsung 14LPE, 14LPP, TSMC 16FFP and others. Synopsys has been doing FinFET designs since 2012. Their custom IC challenges are: DRC complexity, EM/IR analysis, density, performance, coloring, segmentation, and the annoying fact that fin pitch doesn't equal metal pitch.

Working with the foundries they get early PDK layout rules and start making their new cells with an eye toward allowing changes as the rules get updated. The EM assistant helps reduce iterations and speeds closure on reliability concerns. A video comparison showed the same custom IC layout task with and without all of the Custom Compiler automation features, and the net result was a whopping 7X improvement.

Summary
Users have some real choices today when it comes to custom IC layout design tools, and Synopsys appears to have reached critical mass with their Custom Compiler tool. It's worth taking a look to see how it compares with the productivity of your existing design flow.


Design for the System Age

Design for the System Age
by Bernard Murphy on 06-17-2016 at 7:00 am

Of late, it has become painfully obvious that the value of electronics is in the system. And since systems demand continuing improvement, increasing performance and decreasing cost (once partially guaranteed by semiconductor process advances) are now sought through algorithm advances – witness the Google TPU and custom fabrics for high-performance server designs. But designing algorithms in RTL would be a masochistic exercise. The right place to do this is in software, whether C/C++, MATLAB or similar platforms.

Another important change is the increasing use of software-based verification. Design verification increasingly runs more of the software stack on emulators and prototyping platforms, in order to provide confidence and coverage across a range of use models. Together, these factors mean significant components of design and overall SoC verification are increasingly centered on software rather than RTL. The Mentor Calypto group has recognized this shift and is building a solution to address both design and verification together at high-level design (with appropriate re-verification at RTL).

The synthesis part of the story is or should be well known by now. Catapult may well be the most capable of the commercial HLS solutions because it can synthesize from both SystemC and C++. It also allows for untimed, loosely-timed or cycle-accurate models, providing the ability to use timing-based constructs where needed but to expand to full C++ for complex algorithm development. And naturally algorithm design, experimentation and synthesis at this level is more productive than at RTL.

The verification part of the story is where this gets really interesting. If you know Calypto, you know they also have high-level property checking (non-temporal today). This is now embedded in the flow, meaning you can check for things like array-bounds errors, incomplete case statements and user-defined assertions while you are still working with the software model. And since dynamic verification on a software model running software-based tests runs thousands of times faster than any RTL-based testbench could hope to run, this is a much better place than RTL to do intensive testing across many realistic use-cases.

The Catapult verification flow also automatically runs verification on the generated RTL using the same high-level testbench and compares results between the two simulations, all tightly coupled to Mentor verification solutions. Further, C-based assertions and cover statements will be converted to (synthesizable) OVL or PSL equivalents in the generated RTL so they can be checked in RTL-based verification.
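To illustrate the testbench-reuse idea (in Python rather than the C++/SystemC used by Catapult, and making no claim about Mentor's actual APIs), here is a conceptual sketch: one set of stimulus vectors drives both the algorithmic reference and a stand-in for the generated implementation, with results compared vector by vector and a user-defined assertion checked along the way.

```python
# Conceptual sketch of verification reuse: the same high-level testbench
# drives the algorithmic model and a stand-in for the generated RTL
# simulation, and the two result streams are compared.
import random

def golden_model(samples):
    """Algorithmic reference: a trivial moving-average 'decoder' stand-in."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - 3):i + 1]
        assert 0 < len(window) <= 4, "array-bounds style user assertion"
        out.append(sum(window) // len(window))
    return out

def generated_impl(samples):
    """Stand-in for simulating the synthesized RTL; here it is simply the
    same algorithm, which is exactly what a passing comparison shows."""
    return golden_model(samples)

random.seed(0)
stimulus = [random.randrange(256) for _ in range(10_000)]
expected, actual = golden_model(stimulus), generated_impl(stimulus)
mismatches = sum(e != a for e, a in zip(expected, actual))
print(f"{len(stimulus)} vectors compared, {mismatches} mismatches")
```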

This tight coupling gets you pretty close to RTL verification signoff without needing to lift a finger in RTL testbench generation and debug. Pretty close but not perfect, and Mentor freely admitted as much to me. Control logic added during synthesis isn't exercised by the high-level testbench, so it isn't tested in RTL verification either. Verification engineers have to add their own tests for these components, e.g. for stall and reset behavior and unreachability, a task which seems not too onerous today, at least based on customer experiences. I wouldn't be surprised to see Mentor patch some of these automation holes in future releases.

All nice, but customer results are the real test. NVIDIA, on a 10M-gate video decoder for Tegra X1, were able to improve design productivity by 50% and cut verification cost by 80% (from an estimated 1000 CPUs for 3 months to 14 CPUs for 2 weeks). In a later spec revision, they were able to re-optimize the IP from 20nm/500MHz to 28nm/800MHz in 3 days!

Google built a similar video decoder. They were able to get from start of design to verified RTL twice as fast as they had projected without this flow. And IP modifications were 4X faster. If Google thinks this is a good idea, you might want to ponder the way you are doing it today, even if you believe you get better results. In the systems world (aka the majority of our volume market), schedule is generally more important than the highest-possible performance result.

All of which makes me wonder what role RTL will have to play in the future, at least at the system level. Does RTL become the new gate-level, good for emulation and ECOs, but otherwise destined to remain unseen? You can learn more about Catapult advances HERE.

IBM on the way back and still crazy about IOT
by Bill McCabe on 06-16-2016 at 4:00 pm

IBM Update: IOT Transformation on Track?

There have been some interesting developments for Big Blue in the IOT space recently. Last time we reported on them, we were monitoring analysts' worries about the semiconductor business and other divestitures late last year. This year, it seems clear IBM is poised to create even more profitable opportunities in the IOT space. Let's check in and see where they are.

Healthcare connectivity key to IOT growth

The pharmaceutical giant Pfizer recently contracted with IBM to create IOT solutions for clinical trials. According to a recent news article, the two have teamed up to create one of the first completely connected clinical trial environments, for Pfizer's Parkinson's Disease medication.

For enterprise connectivity, Big Pharma has long turned to IBM for its enterprise software used in manufacturing, for finance and accounting and, of course, as an outsourced service desk delivery provider. The move to clinical uses of IBM expertise is not that much of a stretch—and cross-selling to this industry will get easier and easier as use cases — such as the Parkinson’s trial — gain traction.

In the meantime, to prepare for a 2019 launch of this experimental drug, Pfizer and IBM are setting up a "connected house" in Yorktown Heights, NY. About 200 people will live there, with IBM and Pfizer tracking them throughout their days (and, presumably, nights). This control group will help the team test the premise, and will also yield much valuable data for IBM to expand into similar uses for "connected houses."

Stock recovering mightily, thanks to the Cloud

March saw IBM stock rebounding from lows late last year, largely due to a Morgan Stanley rating that took into account the company's growth opportunities in the IOT. After fifteen months of declining revenue, it seems that March's bounce-back mostly reflects IBM's perceived power in the cloud.

“Although Amazon (AMZN) continues to lead overall in the cloud space, within the private and hybrid cloud space, IBM looks to be out front. Katy Huberty, an analyst at Morgan Stanley, believes that the market has, in fact, “underappreciated” IBM’s growth potential, as reflected by its share prices.”

The turnaround is related to IBM's investment in "strategic imperatives… in cloud, analytics, mobile, social, and security technologies," with "IBM's total cloud revenue (growing by) 57% on a year-over-year basis to $10.2 billion." Analysts watching this movement will continue to upgrade the stock, and companies looking to invest in game-changing cloud technologies to gain competitive advantage will sit up and take notice as well.

SAP partnership in Cloud computing allows companies to “dip a toe into the IOT”

When we talk about the IOT among ourselves, chances are we are operating from a set of assumptions that the general business community does not share. Everyone sees the opportunity. But some companies don’t have a clear path to leveraging it. Enter an IBM-SAP cloud partnership.

This partnership will allow businesses that want to "dip a toe" into IOT technologies to continue using classic SAP enterprise infrastructure while introducing cloud-based services over time. The IOT investment might gain sign-off more quickly if the SAP-IBM partnership allows decision-makers to trust their providers more, and which companies are more ensconced in corporate IT than SAP and IBM?

“SAP’s collaboration with the 104-year-old tech giant appeals to established companies that have shied away from outsourcing operations or want use a combination of their own data centers and those in the cloud.”

First Quarter IOT Champs?

So what’s going to happen on April 18, when IBM is scheduled to report 1st Quarter earnings? That depends on who you talk to. Goldman Sachs is maintaining a neutral rating—and the stock is generally thought to be overvalued by about $3 to $10—once again, depending on who you talk to.

As we started out saying, IBM’s focus on healthcare is seen to be its “white knight” in this regard. Using its Watson capabilities, IBM is actively searching for hospital and pharmaceutical partners in oncology, in particular, to build a Watson-based information repository which will “deliver…quick access to the top-tier cancer care exclusive to MSK oncologists, enabling them to provide elite cancer treatment to their patients anywhere in the world.” Using Watson technologies to fine-tune offerings in the IOT, particularly in healthcare, seems to be IBM’s “ticket to ride” for IOT opportunities in the future.

Leveraging its global headquarters for Watson Internet of Things (IoT) in Munich, Germany will be key to IBM’s IOT momentum, as well. Their focus since the center opened in December of 2015 has been on “launching a series of new offerings, capabilities and ecosystem partners designed to extend the power of cognitive computing to the billions of connected devices, sensors and systems that comprise the IoT.” This strategy will play out to its fullest later this year and in the next five years, as the company solidifies its leadership role in the IOT space.

Stay tuned to these pages for more on the players in IOT, or give me a call with your IOT recruiting needs. Need an IOT-enabled CIO responsible for M2M and manufacturing connectivity? Check out our latest article on the IOT-powered ride you're in for in 2016.


The Business of the Semiconductor Business, Part One: What Happened?
by Woz Ahmed on 06-16-2016 at 12:00 pm

This is the first of an occasional series of articles on the semiconductor industry. Many column inches have covered industry consolidation, and in this first article I aim to explain how the industry reached this point. Later articles will cover subjects including China, joint ventures, emerging players like Brazil and Vietnam, monopolies, M&A, national security/national development, customer concentration, verticalisation/disintermediation, ecosystem venturing, etc. The timing of these will be erratic out of practical necessity, and the themes will appear in no particular order.
Continue reading “The Business of the Semiconductor Business, Part One: What Happened?”


The Young and the Restless, PDA vs EDA, Photonic Soaps continued…
by Mitch Heins on 06-16-2016 at 7:00 am

If you read my last article, The Guiding Light and Other Photonic Soaps, you saw my comments about the use of waveguides to "guide the light" in photonic integrated circuits (PICs). This article continues the soap opera theme, this time with the Young and the Restless. My point here is that I am continually struck by the dichotomies between photonic and electronic design. The more these two domains look the same, the more they are different, even down to the engineers with whom I now find myself working (more on that later).

The place where I've found the dichotomy to be most profound is in the design automation tools used by the two industries. In general, the strategy so far has been to try to make photonic design automation (PDA) look as much like electronic design automation (EDA) as possible, even to making the acronyms sound alike (PDA/EDA). The idea is that these two technologies will eventually merge, and since we've got more than three decades of learning on the EDA side, why not, right? In fact, the American Institute for Manufacturing Integrated Photonics (AIM Photonics) has already created a work group called EPDA that is looking to do just that. Upon closer inspection of the challenges, though, it may turn out that there are a lot of reasons why we might want to take another approach. A few examples might be handy at this point.

The first area of dichotomy is that if ever there was a technology that cried out for "automation," it is photonics. There are several degrees of freedom and dependencies in photonics that can make for a very rich and large solution space to be explored for any given design. In EDA, manual interactive processes occur at manual interactive rates, making it difficult to use these processes to explore a large design space for an optimal solution. Why then does EDA seem to want to shoehorn photonic design into a custom layout paradigm which is inherently manual and interactive? As the saying goes, when you have a hammer, everything looks like a nail. PDA companies seem to have a different approach.

A well-known design methodology in EDA is schematic-driven layout (SDL). In electronic design we start with a schematic and spend a lot of time iterating the design between the schematic and simulation before we start to do a layout. The concept of SDL works because we have logical views that share parameters with physical views in a predictable way across foundry processes. In photonics this is not necessarily the case. The functionality of photonic components is highly dependent upon their layout, physical surroundings, temperature and variations in the fabrication process and materials being used. Adding to the complexity, the interconnects between the photonic components are not merely conductors of light but are in fact active components of the circuit. And… the coup de grace to all of this is the fact that photonic switching is usually done through evanescent coupling, where physical components don't actually touch each other and multiple wavelengths can be switched by a single component. In short, we've broken several key assumptions of an EDA-based SDL flow and have set ourselves up for a schematic back-annotation flow from hell. If you really want to twist your mind on something, think about the implications for layout-versus-schematic checks (LVS) in this scenario.

If I haven't driven the point home enough yet, here is another dichotomy for our photonic soap opera. Unlike in EDA, where most of the shapes are rectilinear with perpendicular angles, most photonic layout shapes are curvilinear in nature. Smooth curves, adiabatic tapers, spirals, Y-shaped splitters and joiners, grating couplers and circular resonance structures, all of which can be drawn at any angle, make for a very interesting exercise in traditional EDA layout tools. Even if the tool has support for natively drawing a curved shape, it will eventually store that shape in the database as fractured, discretized rectilinear polygons snapped to some design grid. This causes issues for later edits (like non-orthogonal rotations and re-connections) and physical design rule checking. See Silicon Photonics III, section 4.2.8.1, and an excellent article in the IET Journals by members of MIT, University of Colorado and University of California, Berkeley for more details on how PDA and EDA tools are trying to handle these issues.

So where does this leave us? In the end, we do need a design flow that will enable integrated electronic-photonic designs. Perhaps the "integration" or "assimilation" of PDA into EDA should not be our thrust; instead we should be looking for convenient bridges that can be built between the two domains. There is good news in all of this, and that's where the "Young and the Restless" come into play. While most of the "not-so-young" EDA cast members were attending the Design Automation Conference last week, I was visiting photonic customers, and I was amazed at the number of the "Young and the Restless" with whom I was meeting. Most of them are PhDs just out of school: highly educated and highly motivated professionals who are pushing to make integrated photonics a success. The best part about these young engineers is that they aren't tainted by 30+ years of 'this is how we do EDA'. Instead they are tackling integrated photonics with a fresh new view. For a guy like me, who is still young at heart, it was refreshing to see the new enthusiasm and different thinking. It reminded me of myself 30+ years ago when EDA was just beginning. Perhaps it's time to let the PDA people do what they do best and look for ways to build bridges for them into EDA, instead of trying to mold PDA into something it is not.


IoT Tutorial: Chapter 5 – IoT Clouds and Semantic Interoperability
by John Soldatos on 06-15-2016 at 12:00 pm

Semantic Interoperability of IoT Data Streams: In the previous chapter of the IoT tutorial we introduced the concept of IoT and cloud computing convergence, while presenting concrete examples of IoT/cloud infrastructures, such as popular public IoT clouds (Xively.com, Thingspeak.com). These infrastructures enable the integration of IoT data streams from different producers/providers within the very same cloud. Indeed, within an IoT cloud infrastructure, multiple IoT applications can be developed and deployed independently. Nevertheless, in most cases there is no easy way to combine data and services from diverse IoT deployments, even in cases where these deployments concern similar or even the same application domain. Consider for example two independent IoT smart energy deployments integrated in the same cloud. Even though it is very likely that their data are similar, there is no easy way to combine them in the scope of a new added-value application, e.g., an application that calculates the environmental performance or energy-saving gains achieved across both deployments.

This difficulty is due to the heterogeneity of the data formats of the two deployments, but mainly to their diverse semantics. Indeed, the two deployments are likely to present their data based on different semantic representations. These semantic representations cover the description of IoT resources, including units of measurement, mathematical constructs, sensor types and properties and more. This is a serious limitation of existing public IoT clouds, which are limited to supporting vertical silo applications and provide no support for more integrated horizontal applications, notably applications able to combine IoT data and services from multiple IoT deployments.

However, there are semantic web standards (such as ontologies) that provide models for the semantic unification of diverse data streams, thus providing a uniform way of representing them and reasoning over them. Such ontologies provide the means for semantic interoperability of heterogeneous IoT streams at the data level, including data streams that are integrated and stored within the same cloud infrastructure. Hence, a first step toward semantic interoperability at the data level is to semantically annotate data streams prior to their streaming and integration within the cloud. This semantic annotation is prescribed by recent initiatives on IoT/cloud semantic interoperability (such as the open source OpenIoT project) and by semantic interoperability efforts within IoT standards (such as the oneM2M standard (http://onem2m.org/)).
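As a minimal sketch of what such annotation looks like in practice (illustrative Python using the rdflib library; the example namespace and the value/unit property names are assumptions, not OpenIoT's exact schema), consider annotating a single temperature observation with SSN-style triples:

```python
# A minimal sketch (not OpenIoT's actual X-GSN code) of annotating one
# sensor observation with SSN-style RDF triples, so that streams from
# different deployments share one semantic model.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SSN = Namespace("http://purl.oclc.org/NET/ssnx/ssn#")  # SSN ontology
EX = Namespace("http://example.org/deploymentA#")      # hypothetical

g = Graph()
g.bind("ssn", SSN)
g.bind("ex", EX)

g.add((EX.tempSensor1, RDF.type, SSN.Sensor))
g.add((EX.obs42, RDF.type, SSN.Observation))
g.add((EX.obs42, SSN.observedBy, EX.tempSensor1))
g.add((EX.obs42, EX.hasValue, Literal(21.5, datatype=XSD.float)))
g.add((EX.obs42, EX.unit, Literal("celsius")))         # illustrative property

print(g.serialize(format="turtle"))
```

Once two deployments emit triples against the same ontology, a query engine can treat their observations uniformly regardless of origin.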

Following the semantic annotation of the different IoT data sources/streams, their data and metadata comply with the same semantic model (e.g., ontology), which provides the means for processing the data and the metadata of the streams in a unified way, regardless of their source of origin. Processing of metadata can enable the dynamic selection and filtering of sensors, while processing of data can enable the intelligent selection and filtering of sensor data. The dynamic selection of sensors and devices can enable new models for IoT service provisioning on the cloud, such as Sensing-as-a-Service models, where end-users can dynamically define and access sensing services on demand, i.e. services where sensors and sensing functions are selected and executed dynamically.

The OpenIoT Project

OpenIoT (openiot.eu) was one of the first projects that provided the means for semantically interoperable integration of IoT data streams in the cloud. It also demonstrated the merit of the "Sensing-as-a-Service" approach. OpenIoT is an open source project available on GitHub, which received a Black Duck Open Source Rookie award for 2013. OpenIoT incorporates an enhanced version of the popular Global Sensor Networks (GSN) middleware (http://lsir.epfl.ch/research/current/gsn/) (namely X-GSN), which enables the collection of data streams from different IoT sensors and devices based on popular protocols (e.g., CoAP (Constrained Application Protocol)), along with their semantic annotation according to the W3C Semantic Sensor Networks (SSN) ontology (RDF representation) and extensions over it. Semantically annotated streams are stored within a public or private cloud infrastructure (e.g., the Amazon EC2 public cloud, or private clouds built with the open source OpenStack middleware). Over this cloud infrastructure, OpenIoT has implemented a range of tools for application monitoring and development.

The OpenIoT project provides the following main functionalities:

  • Deployment and Registration of a sensor or internet connected device: OpenIoT enables the integration of virtually any internet connected sensor or device into its cloud infrastructure. This is based on interfacing the sensor to the X-GSN middleware, which accordingly undertakes the semantic annotation of the sensor and its registration in the OpenIoT cloud. The process is facilitated by a visual tool (Schema Editor) provided by the OpenIoT project and requires the implementation of a low-level interface between the device and the X-GSN middleware. The latter is typically a matter of 1-2 man-days.
  • Dynamic discovery of sensors and internet connected devices: OpenIoT provides functions and utilities for the dynamic discovery of sensors and internet connected devices, independently of their source of origin. Discovery is based on querying the RDF repository of sensors/devices, which resides in the OpenIoT cloud. The discovery process takes into account the metadata of the sensors or devices, including their type and location (a sketch of such a discovery query follows this list).
  • Visual IoT Service definition and deployment: OpenIoT offers a development environment which enables users to develop applications (notably sensor queries) through the visual definition of data processing workflows over the semantically interoperable IoT sensors that are integrated in the cloud. The tool enables the visual construction of SPARQL queries over the RDF representations of the sensors and their data. Accordingly, it enables the deployment of the IoT service/query in the cloud. The tool is web-based and multi-user, taking into account the sensors and services that each user is entitled to access depending on their authentication credentials.
  • IoT services visualization (via Mashups): OpenIoT provides a mashup library, which enables the visualization of the services (notably sensor queries). Mashup-based visualization functionalities are provided by most of the public IoT clouds mentioned earlier in this tutorial. The OpenIoT mashup library and related visualization functionalities are integrated with the visual service definition and deployment functionalities within the project's integrated development environment.
  • Resource Management and Optimization: OpenIoT provides several resource management and optimization functionalities, such as data caching, publish-subscribe optimization and more.
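Referring back to the dynamic discovery item above, here is a sketch of the kind of SPARQL query that could be issued against an RDF repository of sensor metadata (Python with rdflib; the namespaces, properties and the sensors.ttl file are hypothetical placeholders, not OpenIoT's actual schema):

```python
# A sketch of SPARQL-based sensor discovery: filter the RDF registry of
# sensors by type and location metadata. Namespaces and properties are
# illustrative, not OpenIoT's exact schema.
from rdflib import Graph

g = Graph()
g.parse("sensors.ttl", format="turtle")  # hypothetical registry dump

query = """
PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>
PREFIX ex:  <http://example.org/deploymentA#>
SELECT ?sensor ?lat ?lon
WHERE {
  ?sensor a ssn:Sensor ;
          ex:observes  ex:Temperature ;
          ex:latitude  ?lat ;
          ex:longitude ?lon .
  FILTER (?lat > 48.1 && ?lat < 48.2)   # restrict to an area of interest
}
"""
for sensor, lat, lon in g.query(query):
    print(sensor, lat, lon)
```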

OpenIoT already has a community of users, who take advantage of the project mainly for research and academic purposes, though the project has also been used in pilot deployments in enterprise environments. The following figures provide snapshots of the OpenIoT architecture, the OpenIoT mashups and the tool for visual definition of IoT services.



Applications of Data Level Semantic Interoperability
Data-level semantic interoperability is only a small part of the wider problem of IoT interoperability, which was introduced in an earlier chapter. However, semantic interoperability across IoT streams from different sources already provides a sound basis for a number of added-value applications in various areas, including:

  • Smart Cities: In smart cities there is nowadays a need to integrate and manage information stemming from a large number of different IoT deployments, which have been planned and carried out independently of each other. In several cases these deployments concern similar applications (e.g., smart energy, urban mobility) and provide similar data (e.g., data about transport or energy). Nowadays there is no easy way to combine data from these deployments in order to implement new integrated management applications (e.g., city-wide monitoring of environmental performance) or even operational applications (e.g., holistic management of urban mobility). Data-level semantic interoperability (based on platforms such as OpenIoT) can indeed facilitate the development of such added-value management infrastructures and applications. This is discussed in more detail in a subsequent chapter on urban mobility.
  • IoT Experimentation: The lack of data-level semantic interoperability is a setback for the development of IoT experiments based on data from multiple IoT testbeds (e.g., air quality data from different infrastructures). The adoption of approaches such as OpenIoT enables the design and execution of more integrated experiments that leverage data from diverse IoT sources, systems and platforms.

These are some examples of the merits of semantic interoperability of diverse IoT resources, which only scratches the surface of the wider problem of IoT interoperability. Additional aspects and solutions for IoT interoperability will be discussed in subsequent chapters, as part of IoT interoperability standards and applications that leverage interoperability functionalities.

Resources for Further Reading
1) OpenIoT is an open source project. Its source code and documentation are available through the following links:

2) An overview description of OpenIoT is available in the following paper:
John Soldatos, Nikos Kefalakis, Manfred Hauswirth, Martin Serrano, Jean-Paul Calbimonte, Mehdi Riahi, Karl Aberer, Prem Prakash Jayaraman, Arkady B. Zaslavsky, Ivana Podnar Zarko, Lea Skorin-Kapov, Reinhard Herzog: OpenIoT: Open Source Internet-of-Things in the Cloud. OpenIoT@SoftCOM 2014: 13-25

3) Some of the resource management capabilities of OpenIoT are discussed in:
N. Kefalakis, S. Petris, C. Georgoulis, J. Soldatos, "Open Source Semantic Web Infrastructure for Managing IoT Resources in the Cloud", chapter in the book: Internet of Things: Principles and Paradigms, Rajkumar Buyya and Amir Vahid Dastjerdi (eds.).

View all IoT Tutorial Chapters


IoT Tutorial: Chapter 4 – Internet of Things in the Clouds
by John Soldatos on 06-15-2016 at 7:00 am

The advent of cloud computing has acted as a catalyst for the development and deployment of scalable Internet-of-Things business models and applications. Therefore, IoT and cloud are nowadays two very closely affiliated future internet technologies, which go hand-in-hand in non-trivial IoT deployments. Furthermore, most modern IoT ecosystems to date are cloud-based, as will be illustrated in subsequent chapters of the tutorial. Prior to describing the essence of IoT and cloud computing integration, we briefly introduce the main cloud computing concepts. An in-depth presentation of cloud computing can be found in relevant textbooks such as Mastering Cloud Computing by Rajkumar Buyya et al.

Cloud Computing Basics
Cloud computing is the next evolutionary step in Internet-based computing, which provides the means for delivering ICT resources as a service. The ICT resources that can be delivered through the cloud computing model include computing power, computing infrastructure (e.g., servers and/or storage resources), applications, business processes and more. Cloud computing infrastructures and services have the following characteristics, which typically differentiate them from similar (distributed computing) technologies:

  • Elasticity and the ability to scale up and down: Cloud computing services can scale upwards during high periods of demand and downward during periods of lighter demand. This elastic nature of cloud computing facilitates the implementation of flexibly scalable business models, e.g., through enabling enterprises to use more or less resources as their business grows or shrinks.
  • Self-service provisioning and automatic deprovisioning: Contrary to conventional web-based Application Service Providers (ASP) models (e.g., web hosting), cloud computing enables easy access to cloud services without a lengthy provisioning process. In cloud computing, both provisioning and de-provisioning of resources can take place automatically.
  • Application programming interfaces (APIs): Cloud services are accessible via APIs, which enable applications and data sources to communicate with each other.
  • Billing and metering of service usage in a pay-as-you-go model: Cloud services are associated with a utility-based pay-as-you-go model. To this end, they provide the means for metering resource usage and subsequently issuing bills.
  • Performance monitoring and measuring: Cloud computing infrastructures provide a service management environment along with an integrated approach for managing physical environments and IT systems.
  • Security: Cloud computing infrastructures offer security functionalities towards safeguarding critical data and fulfilling customers’ compliance requirements.

The two main business drivers behind the adoption of a cloud computing model and associated services are:

  • Business Agility: Cloud computing alleviates tedious IT procurement processes, since it facilitates flexible, timely and on-demand access to computing resources (i.e. compute cycles, storage) as needed to meet business targets.
  • Reduced Capital Expenses: Cloud computing holds the promise of reduced capital expenses (CAPEX, i.e. IT capital investments), through enabling the conversion of CAPEX to operational expenses (OPEX, i.e. paying per month, per user for each service). This is because cloud computing enables flexible planning and elastic provisioning of resources instead of upfront overprovisioning.

Depending on the types of resources that are accessed as a service, cloud computing is associated with different service delivery models:

  • Infrastructure as a Service (IaaS): IaaS deals with the delivery of storage and computing resources towards supporting custom business solutions. Enterprises opt for an IaaS cloud computing model in order to benefit from lower prices, the ability to aggregate resources, accelerated deployment, as well as increased and customized security. The most prominent example of an IaaS service is Amazon's Elastic Compute Cloud (EC2), which uses the Xen open-source hypervisor to create and manage virtual machines.
  • Platform as a Service (PaaS): PaaS provides development environments for creating cloud-ready business applications. It provides a deeper set of capabilities compared to IaaS, including development, middleware, and deployment capabilities. PaaS services create and encourage a deep ecosystem of partners who commit to the environment. Typical examples of PaaS services are Google's App Engine and Microsoft's Azure cloud environment, which both provide a workflow engine, development tools, a testing environment, database integration functionalities, as well as third-party tools and services.
  • Software as a Service (SaaS): SaaS services enable access to purpose-built business applications in the cloud. Such services provide the pay-as-you-go, reduced-CAPEX and elastic properties of cloud computing infrastructures.

Cloud services can be offered through infrastructures (clouds) that are publicly accessible (i.e. public cloud services), but also through privately owned infrastructures (i.e. private cloud services). Furthermore, it is possible to offer services supported by both public and private clouds, which are characterized as hybrid cloud services.

IoT / Cloud Convergence
Internet-of-Things deployments can benefit from the scalability, performance and pay-as-you-go nature of cloud computing infrastructures. Indeed, as IoT applications produce large volumes of data and comprise multiple computational components (e.g., data processing and analytics algorithms), their integration with cloud computing infrastructures could provide them with opportunities for cost-effective on-demand scaling. As prominent examples, consider the following settings:

  • A Small Medium Enterprise (SME) developing an energy management IoT product, targeting smart homes and smart buildings. By streaming the product's data (e.g., sensor and WSN data) into the cloud, it can accommodate its growth needs in a scalable and cost-effective fashion. As the SME acquires more customers and performs more deployments of its product, it is able to collect and manage growing volumes of data in a scalable way, thus taking advantage of a "pay-as-you-grow" model. Moreover, cloud integration allows the SME to store and process massive datasets collected from multiple (rather than single) deployments (a minimal sketch of this kind of data streaming follows this list).
  • A smart city can benefit from the cloud-based deployment of its IoT systems and applications. A city is likely to deploy many IoT applications, such as applications for smart energy management, smart water management, smart transport management, urban mobility of the citizens and more. These applications comprise multiple sensors and devices, along with computational components. Furthermore, they are likely to produce very large data volumes. Cloud integration enables the city to host these data and applications in a cost-effective way. Furthermore, the elasticity of the cloud can directly support expansions to these applications, but also the rapid deployment of new ones without major concerns about the provisioning of the required cloud computing resources.
  • A cloud computing provider offering public cloud services can extend them to the IoT area, by enabling third parties to access its infrastructure in order to integrate IoT data and/or computational components operating over IoT devices. The provider can offer IoT data access and services in a pay-as-you-go fashion, through enabling third parties to access resources of its infrastructure and accordingly charging them in a utility-based fashion.
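As a minimal sketch of the SME scenario above, the following Python pushes periodic sensor readings to a public IoT cloud over HTTP. The endpoint, feed ID, API key header and payload format are hypothetical placeholders rather than any particular provider's real API:

```python
# Sketch of streaming sensor data to a public IoT cloud. The endpoint,
# feed ID and API key are hypothetical placeholders, not any provider's
# actual interface.
import time
import random
import requests

ENDPOINT = "https://api.example-iot-cloud.com/v2/feeds/12345"  # hypothetical
API_KEY = "YOUR_API_KEY"

def read_temperature_c():
    """Stand-in for a real sensor driver."""
    return round(20.0 + random.uniform(-0.5, 0.5), 2)

while True:
    payload = {"datastream": "temperature",
               "value": read_temperature_c(),
               "timestamp": time.time()}
    resp = requests.post(ENDPOINT,
                         json=payload,
                         headers={"X-Api-Key": API_KEY},
                         timeout=5)
    resp.raise_for_status()   # surface auth/provisioning errors early
    time.sleep(60)            # one reading per minute
```

The pay-as-you-grow property follows directly: adding deployments just means more clients posting to the same feed infrastructure, with storage and processing scaled on the cloud side.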

These motivating examples illustrate the merit of and need for converging IoT and cloud computing infrastructures. Despite these merits, the convergence has always been challenging, mainly due to the conflicting properties of IoT and cloud infrastructures. In particular, IoT devices tend to be location specific, resource constrained, expensive (in terms of development/deployment cost) and generally inflexible (in terms of resource access and availability).

On the other hand, cloud computing resources are typically location independent and inexpensive, while at the same time providing rapid and flexible elasticity. In order to alleviate these incompatibilities, sensors and devices are virtualized prior to integrating their data and services in the cloud, in order to enable their distribution across any cloud resources. Furthermore, service and sensor discovery functionalities are implemented on the cloud in order to enable the discovery of services and sensors that reside in different locations.

Based on these principles, IoT/cloud convergence efforts started over a decade ago, i.e. in the very early days of IoT and cloud computing. Early efforts in the research community (i.e. during 2005-2009) focused on streaming sensor and WSN data into cloud infrastructures. Since 2007 we have also witnessed the emergence of public IoT clouds, including commercial efforts. One of the earliest has been the famous Pachube.com infrastructure (used extensively for radiation detection and the production of radiation maps during the earthquakes in Japan). Pachube.com has evolved (following several evolutions and acquisitions of the infrastructure) into Xively.com, which is nowadays one of the most prominent public IoT clouds.

Nevertheless, there are tens of other public IoT clouds as well, such as ThingWorx, ThingSpeak, Sensor-Cloud, Realtime.io and more; the list is certainly non-exhaustive. These public IoT clouds offer commercial pay-as-you-go access to end-users wishing to deploy IoT applications on the cloud. Most of them come with developer-friendly tools, which enable the development of cloud applications, thus acting like a PaaS for IoT in the cloud.

Similarly to cloud computing infrastructures, IoT/cloud infrastructures and related services can be classified into the following models:

  • Infrastructure-as-a-Service (IaaS) IoT/Clouds: These services provide the means for accessing sensors and actuators in the cloud. The associated business model involves the IoT/cloud provider acting as either a data provider or a sensor provider. IaaS services for IoT provide access control to resources as a prerequisite for offering related pay-as-you-go services.
  • Platform-as-a-Service (PaaS) IoT/Clouds: This is the most widespread model for IoT/cloud services, given that it is the model provided by all of the public IoT/cloud infrastructures outlined above. As already illustrated, most public IoT clouds come with a range of tools and related environments for application development and deployment in a cloud environment. A main characteristic of PaaS IoT services is that they provide access to data, not to hardware. This is a clear differentiator compared to IaaS.
  • Software-as-a-Service (SaaS) IoT/Clouds: SaaS IoT services are the ones enabling their users to access complete IoT-based software applications through the cloud, on demand and in a pay-as-you-go fashion. When sensors and IoT devices are not visible, SaaS IoT applications closely resemble conventional cloud-based SaaS applications. There are however cases where the IoT dimension is strong and evident, such as applications involving the selection of sensors and the combination of data from the selected sensors in an integrated application. Several of these applications are commonly called Sensing-as-a-Service, given that they provide on-demand access to the services of multiple sensors. Note that SaaS IoT applications are typically built over a PaaS infrastructure and enable utility-based business models involving IoT software and services.

These definitions and examples provide an overview of IoT and cloud convergence and why it is important and useful. More and more IoT applications are nowadays integrated with the cloud in order to benefit from its performance, business agility and pay-as-you-go characteristics.

In the following chapters of the tutorial, we will present how to maximize the benefits of the cloud for IoT, through ensuring semantic interoperability of IoT data and services in the cloud, thus enabling advanced data analytics applications, but also the integration of the wide range of vertical (silo) IoT applications that are nowadays available in areas such as smart energy, smart transport and smart cities. We will also illustrate the benefits of IoT/cloud integration for specific areas and segments of IoT, such as IoT-based wearable computing.

View all IoT Tutorial Chapters