
Where are the Entrepreneurs?

by Randy Smith on 05-22-2016 at 10:00 am

This week I attended the UpWest Labs event in San Francisco. UpWest Labs provides seed funding and incubation for a wide range of domains including Enterprise Software, Internet of Things, Infrastructure Technologies, Artificial Intelligence, Consumer Applications, Drones, Cyber Security, Augmented Reality / Virtual Reality, and Marketplaces. Uniquely, UpWest seeks out entrepreneurs in Israel and helps them gain a foothold and build momentum in the US. It is a formula which seems to be thriving.

I visited Israel last year to speak on a couple of panels, one at an electronics industry event and the other at an event at the engineering school of Bar Ilan University. At both events I was struck by the energy, enthusiasm and creativity of the local engineers attending. As I went about my week I learned that there are hundreds of high-tech start-ups in Israel attacking a diverse array of markets and challenges. Within the semiconductor and EDA community, we have not seen such a large number of start-ups in a very long time. It was quite refreshing.

The UpWest event was hosted at the SF headquarters of HoneyBook, an UpWest investment. HoneyBook’s mission statement says the company is out to connect the different parts of the events industry by reimagining the way creative professionals work with their colleagues and clients. For anyone planning an event, HoneyBook’s goal is to organize your job(s) seamlessly, so you can focus on what you do best: creating unforgettable moments. While it is not as complicated as the wide array of problems solved by EDA tools, it is valuable. HoneyBook was founded only in 2013, yet it already has over 60 employees and is profitable. Also note that BVA (where I am now an advisor), a recruiting firm that staffs and invests in start-ups, has built and invested in multiple UpWest deals, including Clicktale and CoMeet.

A quick look at the portfolio of UpWest Labs will reveal many interesting companies, including some with a high-tech flavor. One that caught my eye at the event was AR-Cadia. AR-Cadia’s CEO and founder, Teddy Bercovitz, gave me a demonstration of some mobile apps that were using the AR-Cadia platform. “AR” here stands for augmented reality. Augmented reality glasses can display additional information to the wearer while still letting them see their environment. The possible uses are enormous: in the medical field, giving doctors the ability to see different images or scans superimposed over a patient’s body; or in inventory management, showing the organization or location of products within a warehouse. AR-Cadia has built a platform that allows mobile app developers to support a wide array of AR glasses from many manufacturers without needing to write drivers for each one – a fantastic idea that may dramatically accelerate this new market.

Alas, there were no EDA start-ups present, although I did see Jacques Benkoski, former CEO of Monterey Design Systems and current USVP partner, in attendance. I also saw some other VCs at the event who invested in EDA 15 or 20 years ago. Many believe that the heyday of EDA start-ups is over, never to return again. The economics are certainly different now. Larger EDA exits seem to take more than 10 years. The exits are mostly confined to selling the business to one of the major EDA companies who can then deploy the new technology through their much larger sales channels. This leads most people familiar with the industry to conclude that EDA start-up companies should not raise more than $6M over their lifetime in order to have a worthwhile exit. These start-ups also need to have products in a niche or underserved space, or have a defensible breakthrough technology in an existing tool area – both are difficult.

I was recently heartened to see a new EDA company – SnapEDA. The CEO of SnapEDA, Natasha Baker, is young, energetic and passionate about being in EDA. In years past I was always complaining at EDAC (now ESDA) meetings that we were not spending the effort to attract enough young engineers into EDA to even have an EDA community in the future. I had nightmares of rolling down the aisles at DAC in 25 years and only seeing participants in wheelchairs and walkers. Natasha and SnapEDA give me hope that my nightmares will not come true.

Randy Smith is an advisor at Brown Venture Associates (BVA). Randy and BVA are experienced masters of team building. As a former EDA and semiconductor IP executive, Randy has repeatedly built winning teams. BVA has been doing the same for more than 20 years while building a proprietary recruiting methodology that is geared for the modern recruiting environment. BVA also has invested in 50+ companies with 20+ liquidity events – the teams they build, win.


Donald Trump’s demand that Apple must make iPhones in the U.S. actually isn’t that crazy

by Vivek Wadhwa on 05-21-2016 at 7:00 am

Donald Trump has promised that “we’re gonna get Apple to start building their damn computers and things in this country, instead of in other countries.” He said this at a speech at Virginia’s Liberty University and several other events. It is very likely that he is not serious; Trump tends to say things he couldn’t possibly mean. But he did raise an intriguing question about whether Apple — and other American companies — could bring manufacturing back to the United States.

When American companies moved manufacturing to China, it was all about cost. China’s wages were amongst the lowest in the world, and its government provided subsidies and turned a blind eye to labor abuse and environmental destruction. Things have changed. China’s labor, real estate, and energy costs have increased to the point that they are comparable to some parts of the United States. Subsidies are harder to get, and Chinese labor is not tolerating the abuse that it once did. China is now a more expensive place to manufacture than Indonesia, Thailand, Mexico, and India, according to Boston Consulting Group.

Add to this the efforts by the Chinese government to spur indigenous innovation — by forcing foreign companies to reveal their intellectual property and use local suppliers—and you have strong motivation to relocate manufacturing.

But Apple is by no means looking to exit from China, its second largest market. It just announced an investment of $1 billion in Uber’s rival Didi Chuxing. It clearly saw a large market opportunity and a way to appease the Chinese government.

Technology is, however, changing the labor-cost equation even more and China is becoming unpredictable because of its faltering economy. It may make sense for Apple to locate some of its manufacturing closer to other markets just to protect itself from this uncertainty.

What is changing the labor situation is robotics. Robots can now do the same manufacturing jobs as humans — for a fraction of the cost. A new generation of robots, from companies such as Rethink Robotics of Boston, ABB of Switzerland, and Universal Robots of Denmark, is dexterous enough to thread a needle and nimble enough to work beside humans. They can do repetitive and boring circuit board assembly and pack boxes. These robots cost less than $40,000 to purchase and as little as a dollar per hour to operate. And unlike human workers, they will work 24-hour shifts without complaining.

The hurdle in relocating manufacturing for any company such as Apple is the tie to the chain of suppliers of its products’ electronics components. The key question therefore is: how dependent is Apple on its China supply chain?

In 2015, the supply chain for Apple’s products consisted of 198 global companies with 759 subsidiaries — so it is quite complex. Seamus Grimes of National University of Ireland and Yutao Sun of Dalian University of China studied each of these subsidiaries and interviewed executives of those located in China. The objective of their research was to advise China on how it could move further up the value chain and cause foreign companies to give it more of their intellectual property. The paper they published, however, provides another interesting insight: how few of Apple’s technology suppliers are actually Chinese.

The authors researched each of the 759 subsidiaries and categorized the electronics components into core, non-core, and assembly-related, with the high-cost, intellectual-property dependent technologies being designated as core. They learned that 336, or 44.2 percent, of these subsidiaries were manufacturing in China; 115 were in Taiwan; and 84 in Europe or the United States.

When the researchers looked into the ownership of subsidiaries that were manufacturing in China, they found that only 3.95 percent were Chinese. And only 2.2 percent of the core component suppliers were Chinese. The largest proportion, 32.7 percent, were Japanese; 28.5 percent were American; 19.0 percent were Taiwanese; and 6.5 percent were European.

To put it simply, more than half of the components of Apple’s products are imported into China and practically none of the important, core, technologies are made by Chinese companies. Foreign companies do not trust China and nearly all of the intellectual property in Apple’s products originates from outside it.

This means that the value chains could be shifted over time, which raises the question: what would it cost to move manufacturing to the United States?

For this, it may be best to look at what Apple’s manufacturing partner Foxconn is doing in India. The Economic Times reports that Foxconn is finalizing negotiations to build a $10 billion facility to manufacture iPhones in India. The report anticipates it will take 18 months to get this operational.

India does have a labor cost advantage over the U.S. but robots could eliminate this. Similar manufacturing facilities could be set up in the United States, product by product.

Of course, this will not be easy and there are many risks. But it certainly is possible for Apple to bring manufacturing back to the United States. If Apple can do this, so can most other companies; their value chains are a lot less complex than Apple’s.

So it may turn out that for once, Donald Trump’s rant isn’t so crazy.

Here is an interview on CNBC’s Squawk Box with Becky Quick. For more, follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com.


Cache Coherent Systems Get a Boost from New Technology

by Tom Simon on 05-20-2016 at 12:00 pm

The speed and power penalties for accessing system RAM affect everything from artificial intelligence platforms to IoT sensor nodes. There is a huge power and performance overhead when the various IP blocks in an SOC need to go to DRAM. Memory caches have become essential to SOC design to reduce these adverse effects. However, ensuring cache coherency across all the local caches and system RAM can be tricky. The problem was not so bad when there were fewer IP blocks that required caches, but things have changed.

Up until now, cache coherency solutions were typically manually created for point-to-point connections to DRAM, or specific IP existed for cache coherency to serve predetermined use models. However, it looks like Arteris has announced what might be game-changing technology for implementing cache coherent systems on a wide range of SOCs. Their announcement on May 17th states that the technology can be used to connect a large variety of IP, including blocks with differing cache coherency protocols, semantics and sizes. It can even be configured to provide the benefits of caching to IP blocks that do not support local caches.

It’s expected that Arteris will offer more details about the technology and its implementation in the coming weeks. Nonetheless, a lot can be gleaned from the recent announcement. First and foremost, it’s apparent that Arteris is using their robust and proven FlexNoC on-chip interconnect IP as a building block for this technology. Arteris is already adept at moving data around on SOCs, where the data sizes and protocols vary dramatically. It makes complete sense to take advantage of this transport technology to help implement interfaces for cache agents.

According to their announcement, the new technology can simultaneously support heterogeneous cache agents, and can even easily add local coherent caches to non-caching IP blocks. This should make it easy to mix IP from different sources. Arteris offers the capability to add proxy caches to non-caching clients to boost overall system performance. These proxy caches fully participate in the coherency management process.

Another key element of the technology is the availability of multiple configurable snoop filters. By customizing the organization, size and associativity of multiple snoop filters, SOC designers can improve the PPA of their designs.
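Arteris has not disclosed implementation details, but the payoff of a snoop filter can be sketched with a toy model (class and method names below are purely illustrative, not Arteris APIs):

```python
class SnoopFilter:
    """Toy snoop filter: tracks which cache agents *may* hold each line,
    so coherence messages go only where needed instead of broadcasting."""

    def __init__(self):
        self.presence = {}  # line address -> set of agent ids (sharers)

    def on_read(self, agent, line):
        # Record the reader as a potential sharer of this line.
        self.presence.setdefault(line, set()).add(agent)

    def on_write(self, agent, line):
        # Only recorded sharers need an invalidating snoop; every other
        # agent is filtered out, saving interconnect traffic and power.
        targets = self.presence.get(line, set()) - {agent}
        self.presence[line] = {agent}  # writer becomes the sole holder
        return targets

sf = SnoopFilter()
sf.on_read(0, 0x40)            # agents 0 and 1 cache line 0x40
sf.on_read(1, 0x40)
snoops = sf.on_write(2, 0x40)  # agent 2 writes: snoop only agents 0 and 1
```

With a pure broadcast protocol, every agent would be snooped on every write; sizing and organizing the filter, as the announcement describes, trades tracking precision against silicon area.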

This Arteris technology is scalable and configurable. Each cache agent can be configured to suit its own needs and to talk to the other members of the coherency system using optimal interfaces. Using FlexNoC as the transport layer brings all the FlexNoC advantages to the implementation, addressing the typical design concerns of interconnect area, timing and power consumption.

One of the core tenets of the Arteris team is that ‘wires’ are very expensive relative to the cost of transistors in advanced nodes. This counterintuitive-sounding assertion comes from looking at the total system area needed for point-to-point interconnect. Very complex IP blocks typically need to talk to many other subsystems. Wiring up dedicated connections for all the subsystems that need to talk to each other would be prohibitive. Even if the blocks were connected this way, the utilization of those connections would be quite low. Just as FlexNoC improves data transfer between blocks, it can be used as a building block for cache coherency, and the same benefits apply: optimal utilization of system resources, configurability, etc.

With this technology Arteris is now offering unique enabling technology that is not available anywhere else. Nevertheless, it is compatible with coherency offerings from ARM and others. For Arteris, it shows a unique level of innovation and a willingness to go deeper into the design process to develop new products that solve significant design problems.


Stop FinFET Design Variation @ #53DAC and get a free book!

by Daniel Nenni on 05-20-2016 at 7:00 am

If you plan on visiting Solido (the world leader in EDA software for variation-aware design of integrated circuits) at the Design Automation Conference next month for a demonstration of Variation Designer, register online now and get an autographed copy of “Mobile Unleashed”. Such a deal!

Solido Variation Designer is used by 1000+ designers across 35+ major semiconductor companies to solve key production design challenges in memory, analog/RF, and standard cell design.

REGISTER HERE

Solido will be demonstrating the newest version of its software, Solido Variation Designer 4, just released in March. Variation Designer 4 has advanced Solido’s state-of-the-art technology to tackle the latest variation-aware design challenges in nanometer processes, including FinFET, FDSOI, and low-power, low-voltage design.

The following demonstrations are available:

Solido Variation Designer for Memory
Full Chip Memory and Cell Level Statistical Verification
Solido Variation Designer delivers the most advanced and industry-proven technologies for statistical design & verification of memories:

  • Hierarchical Monte Carlo: Verify full-chip memories with perfect statistical accuracy

    • Statistically correct verification of replicated structures
    • Correct application of both global and local variation
  • High-Sigma Monte Carlo: Verify columns, bitcells, sense amps, and other memory blocks to high-sigma quickly and with perfect Monte Carlo and SPICE accuracy
  • Accurately verify production-sized designs (such as memory columns and critical paths)
  • Solve pass/fail, binary, and multi-modal output measurements
  • Efficiently debug high-sigma variation problems
  • Generate trustworthy high-sigma verification results
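Solido’s algorithms are proprietary, but the core difficulty that high-sigma verification addresses can be illustrated with a toy Monte Carlo in Python; the shifted-sampling (importance sampling) scheme below is a textbook technique, not Solido’s method:

```python
import math
import random

def naive_mc_fail_rate(threshold, n, rng):
    # Plain Monte Carlo: at 4+ sigma virtually no samples fail, so
    # estimating the failure rate this way needs millions of SPICE runs.
    fails = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > threshold)
    return fails / n

def importance_mc_fail_rate(threshold, n, rng):
    # Sample from N(threshold, 1), i.e. centered on the failure region,
    # and re-weight each failing sample by the likelihood ratio
    # phi(x) / phi(x - threshold) = exp(-threshold*x + threshold^2/2).
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += math.exp(-threshold * x + threshold ** 2 / 2.0)
    return total / n

rng = random.Random(1)
sigma = 4.0
exact = 0.5 * math.erfc(sigma / math.sqrt(2.0))  # P(N(0,1) > 4), ~3.2e-5
estimate = importance_mc_fail_rate(sigma, 50_000, rng)
```

With 50,000 samples, the importance-sampled estimate lands within a few percent of the exact tail probability, while naive sampling of the same budget would typically see zero failures at this sigma.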

Solido Variation Designer for Standard Cell
Variation-Aware Verification of Cell Libraries
Standard cell designers use Solido Variation Designer to accelerate the verification of their standard cell libraries across variation. The key technologies are:

  • High-Sigma Monte Carlo: Monte Carlo & SPICE accurate high-sigma verification of standard cells

    • Fast and accurate verification to high-sigma
    • Accurately capture performance and power vs. sigma tradeoffs for the entire sigma range
    • Batch operation for creating customized library verification flows
  • Fast Monte Carlo: Fast, accurate statistical verification of standard cells
  • 3-sigma verification & corner extraction
  • Batch operation for creating customized library verification flows

Solido Variation Designer for Analog/RF and Custom Digital
Statistical & PVT Verification and Debug
Solido Variation Designer gives analog designers the ability to design with greater speed, accuracy, coverage, and insight than ever before:

  • Statistical PVT: Unprecedented accuracy and coverage across 3-sigma statistical variation and operating conditions
  • Fast PVT: 2-50X faster verification across corners & operating conditions
  • Fast Monte Carlo: Fast, accurate 3-sigma verification & corner extraction
  • High-Sigma Monte Carlo: Monte Carlo & SPICE accurate high-sigma verification and design of analog/RF and custom digital circuits.
  • DesignSense: Variation-aware sensitivity & design debugging.

There is also a Variation-Aware Design DAC Panel with ARM, IBM, Invecas, and Solido on June 6. Plus Solido is presenting at the TSMC DAC Invited Theater Presentations (Monday to Wednesday June 6-8) and the Samsung DAC Invited Theater Presentation (Monday June 6).

If you are not attending #53DAC you can attend the TSMC and Solido “Collaborate for Variation-Aware Design of Memory and Standard Cell at Advanced Process Nodes” webinar:

Date: June 1, 2016
Time: 10am Pacific
Duration: 55 minutes

Click here to register!

Abstract:

Variation effects have an increasing impact at advanced process nodes, and at each new node, additional sources of variation must be considered. Furthermore, increased competition is forcing tighter design margins to make high-performance, low-power, low-cost products. Designers must now do more variation analysis than ever to achieve these tighter margins, using advanced variation-aware technology for speed, accuracy and coverage to deliver competitive chips on schedule. This webinar will discuss how TSMC and Solido collaborate to offer variation-aware design techniques for memory and standard cells on TSMC advanced processes using Solido’s new Variation Designer 4.


Internet of Things Tutorial: Chapter Three

by John Soldatos on 05-19-2016 at 4:00 pm

Emerging and future Internet-of-Things (IoT) systems will increasingly comprise multiple heterogeneous internet-connected objects (e.g., sensors), which will be operating across multiple layers (e.g., consider a camera providing a view of a large urban area and another focusing on a more specific location within the same area). Likewise, IoT applications will have to collect and analyze information from multiple heterogeneous objects, or even to compose services based on the interactions of multiple objects. Dealing with multiple sensors and internet-connected objects, at multiple levels, requires:

  • Uniform representation of IoT data and operations, preferably including some standards-based representation for sensor data.
  • Flexibility in reallocating and sharing resources (including sensors, devices, datasets and services).
  • Deploying and using resources (such as sensors) independently from a specific location, application or device.

These requirements are closely associated with interoperability across IoT sensors and devices, which refers to the ability of two or more autonomous, heterogeneous, distributed digital entities to communicate and cooperate. Under this definition, different entities are able to exchange and share both information and services, despite major differences in programming platform, application context or data format. Furthermore, interoperability implies that their communication or collaboration does not require any extra or special effort from the human or machine leveraging the results of their collaboration.

The fulfillment of these interoperability requirements has been addressed by early sensor and WSN middleware frameworks, such as Global Sensor Networks and its X-GSN extension, which is part of our OpenIoT project. Furthermore, standards-based implementations using Open Geospatial Consortium (OGC) standards have also emerged, such as the Sensor Web project and the 52north project. OGC has introduced Sensor Web Enablement (SWE) as an interoperability framework for accessing and utilizing sensors and sensor systems in a space-time context via Internet and Web protocols. OGC specifies a pool of web-based services, which can be used to maintain a registry/directory of available sensors and observation queries.

These services are built around the same web standards for describing the sensors’ outputs, platforms, locations, and control parameters, which are used across applications. Standards comprise specifications for interfaces, protocols, and encodings that enable the use of sensor data and services. The SWE standards are motivated by the following use cases:

  • Quick discovery of sensors that can meet certain requirements and constraints in terms of location, observables, quality or even ability to perform certain tasks.
  • Acquisition of multi-sensor information in standard formats, which can be readable and interpretable by both humans and software processes.
  • Access to sensor observations in a common manner, and in a form easily customizable to the needs of a given application.
  • Management and handling of subscriptions for receiving alerts upon the occurrence or measurement of specific phenomena.

Among the most prominent SWE standards-based modelling languages are:

  • Sensor Model Language (SensorML), which provides standard models and XML Schemas for describing sensor systems and processes, while at the same time providing information needed for discovery of sensors. It also describes the location of sensor observations and the processing of low-level sensor observations.
  • Transducer Model Language (TransducerML), which provides a conceptual model and XML Schema for describing transducers and supporting real-time streaming of data to and from sensor systems.
  • Observations and Measurements (O&M), which provides standard models and XML Schemas for encoding observations and measurements from a sensor, both archived and real-time.

Likewise, SWE prescribes the following web services:

  • Sensor Observation Service (SOS), which provides interfaces for requesting, filtering, and retrieving observations, along with information about sensor systems.
  • Sensor Alert Service (SAS), which provides an interface for publishing and subscribing to alerts from sensors.
  • Sensor Planning Service (SPS), which provides the interfaces for requesting user-driven acquisitions and observations.
  • Web Notification Service (WNS), which provides a standard Web service interface for asynchronous delivery of messages or alerts from SAS and SPS web services and other elements of service workflows.
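To make the SOS interface above concrete, the sketch below builds a KVP (key-value-pair) GetObservation request; the endpoint URL and the offering/property identifiers are hypothetical placeholders, not real servers:

```python
from urllib.parse import urlencode

SOS_ENDPOINT = "http://example.org/sos"  # hypothetical SOS server

def get_observation_url(offering, observed_property, version="1.0.0"):
    """Build an OGC SOS GetObservation request in KVP (URL) encoding."""
    params = {
        "service": "SOS",             # mandatory for all SWE services
        "request": "GetObservation",  # the SOS operation being invoked
        "version": version,
        "offering": offering,
        "observedProperty": observed_property,
    }
    return SOS_ENDPOINT + "?" + urlencode(params)

url = get_observation_url("urn:example:offering:city_air",
                          "urn:ogc:def:property:OGC::Temperature")
```

The same pattern applies to the other SWE services: a client needs to know only the standard operation names and parameters, not any vendor-specific sensor protocol.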

SWE provides a very good basis for the implementation of interoperable applications based on open standards. It’s also a very good basis for understanding several IoT concepts and common services, such as discovery, location-awareness, data acquisition, subscriptions, and sensor/IoT systems, as well as the merits of standards and interoperability when it comes to building non-trivial systems. At the same time, it provides fertile ground for understanding and implementing semantics towards the vision of the semantic sensor web.

The Semantic Sensor Web is based on the enhancement of existing Sensor Web models with semantic annotations, towards producing semantic descriptions and enabling enhanced access to sensor data. The X-GSN project (part of Openiot.eu) provides a good platform for analyzing this process, since it enriches sensor descriptions (data/metadata) with semantic annotations. This is a very popular concept, which is increasingly becoming part of all non-trivial IoT systems (e.g., the emerging oneM2M standard has been enhanced with support for semantic annotations, which has been demonstrated in the scope of the H2020 FIESTA project). Indeed, semantic annotations provide model references to ontology concepts, thus enabling more expressive and interoperable descriptions of sensor concepts.
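As a sketch of what such an annotation looks like in practice, the function below emits RDF triples (in N-Triples syntax) that tie a raw sensor reading to ontology concepts; all URIs here are illustrative stand-ins, not the actual X-GSN or SSN vocabulary:

```python
def annotate_reading(sensor_uri, property_uri, value):
    """Return N-Triples lines linking one observation to ontology
    concepts, so any RDF/SPARQL consumer can interpret the reading
    without knowing the sensor's native data format."""
    obs = sensor_uri + "/obs/1"          # illustrative observation URI
    ns = "http://example.org/ssn-like#"  # placeholder ontology namespace
    return [
        f"<{obs}> <{ns}observedBy> <{sensor_uri}> .",
        f"<{obs}> <{ns}observedProperty> <{property_uri}> .",
        f'<{obs}> <{ns}hasValue> "{value}" .',
    ]

triples = annotate_reading("http://example.org/sensor/42",
                           "http://example.org/property/temperature",
                           21.5)
```

Because each triple points at a shared ontology concept rather than a device-specific field name, two sensors from different vendors measuring the same property become queryable in exactly the same way.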

SWE and the projects that implement it are mostly focused on sensors, sensor networks and their interoperability, without adequately addressing other IoT devices and aspects. Indeed, the rising complexity of IoT systems is posing additional interoperability challenges that span whole IoT ecosystems, including interoperability across different devices, security and privacy mechanisms, things & services directories and more.

An in-depth discussion of IoT interoperability issues is provided in the semantic interoperability position paper of the Internet of Things Research Cluster of the Alliance for IoT Innovation (AIOTI). The EU’s H2020 programme has recently funded entire projects (such as InterIoT, symbIoTe, bigIoT) aimed at providing interoperability across IoT ecosystems. This is particularly important, as IoT interoperability is (according to McKinsey in its “Unlocking the potential of the Internet-of-Things” report) expected to be the source of 40% of IoT’s market value in the coming years. We will thus revisit the interoperability discussion in future posts of this series, when discussing IoT/cloud integration and IoT ecosystems.

View all IoT Tutorial Chapters


Internet of Things Tutorial: Chapter Two

by John Soldatos on 05-19-2016 at 12:00 pm

In our Internet-of-Things (IoT) introduction we highlighted Wireless Sensor Networks (WSN) and Radio Frequency Identification (RFID) as two of the most prominent IoT technologies. Indeed, these two technologies can be considered the forerunners of IoT. During the previous decade, IoT was in most people’s minds directly associated with WSN and/or RFID, and to a large extent it still is.



Internet of Things Tutorial: Chapter One

by John Soldatos on 05-19-2016 at 7:00 am

The Internet-of-Things (IoT) is gradually becoming one of the most prominent ICT technologies underpinning our society, by enabling the orchestration and coordination of a large number of physical and virtual Internet-Connected-Objects (ICO) towards human-centric services in a variety of sectors including logistics, trade, industry, smart cities and ambient assisted living. In a series of tutorial-style posts, I will present a set of Internet-of-Things technologies and applications. The present post is the first, introductory one.

For over a decade after the introduction of the term Internet-of-Things, different organizations and working groups have been providing various definitions. For example:


  • ITU-T (as part of ITU-T Y.2060) defines IoT as: “A global infrastructure for the Information Society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies.”
  • The EU and more specifically the EU Projects Research Cluster in the Internet of Things (IERC) gives the following definition: “A dynamic global network infrastructure with self-configuring capabilities based on standard and interoperable communication protocols where physical and virtual “things” have identities, physical attributes, and virtual personalities and use intelligent interfaces, and are seamlessly integrated into the information network.”

Independently of the definition, the exploitation and coordination of ICOs’ data and services is enabling a large number of novel human-centric applications (e.g., Smart Cities & Communities, IoT in Healthcare, IoT in Manufacturing & Logistics, IoT Platforms & Ecosystems).

The IoT paradigm is enabling the vision of pervasive & ubiquitous computing, which back in the 90’s was envisaged to be a direct consequence of the rapid proliferation of computing devices. Indeed, following the era of super-computing (where a few fat computing systems served many users (many-to-one)) and the era of personal computing (where each user had his or her own personal device (one-to-one)), we are already in the era where each human individual enjoys services based on multiple internet-connected computing devices (such as laptops, mobile devices, multi-purpose sensors, home gateways).

Hence, the IoT revolution is indeed propelled by the exponential increase in the number of connected devices, which is (according to CISCO) estimated to reach 50 billion devices in 2020. This means that each of the estimated 7.8 billion people on the planet will use, on average, more than six devices. We have already crossed the point (back in 2005) where the number of internet-connected devices equaled the number of people around the globe.
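The per-person figure follows directly from the two projections quoted above:

```python
devices_2020 = 50e9  # CISCO's projection for connected devices in 2020
people_2020 = 7.8e9  # projected world population in 2020

per_person = devices_2020 / people_2020
print(round(per_person, 1))  # about 6.4 devices per person
```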

The proliferation of internet-connected objects empowers interactions not only between devices and things, but also between things and people, thus providing unprecedented application opportunities. Indeed, people can directly connect to things (such as mobile phones, electronic health records etc.) via wearable sensors such as motion sensors, ECG (electrocardiogram) sensors and smart textiles. Likewise, things connect to each other, e.g., as part of wireless sensor networks (WSN), but also as part of a WSN’s interaction with other devices (gateways, mobile devices etc.). As another example, car sensors connect to intelligent transport systems and to sensors from other vehicles. These interactions are in most cases empowered by heterogeneous networking infrastructures, which provide ubiquitous high-quality connectivity, such as 4G/5G infrastructures.

Along with the term internet-of-things, several similar/analogous technologies and terms have been introduced. These include, for example:


  • M2M (Machine-to-Machine).
  • IoE (Internet of Everything).
  • Cloud of Things.
  • Web of Things.
  • Cyber Physical Systems (CPS).
  • USN (Ubiquitous Sensor Networks).

All these terms are very relevant (and in most cases overlapping) to IoT. Nevertheless, they also have subtle (but sometimes important) differences from IoT. We will illustrate these terms and their differences from IoT in following posts. In general there are different viewpoints on IoT, and IoT experts approach it from different angles. For example:


  • The “Things-Oriented” viewpoint focuses on technologies for the representation and use of things, e.g., RFID (Radio-Frequency Identification), NFC (Near Field Communications), WSN (Wireless Sensor Networks), things-connectivity technologies etc.
  • The “Internet-Oriented” viewpoint focuses on the internet and web aspects of IoT, such as the web-of-things layer for simplifying application development, IPv6 for internet connectivity and identification etc.
  • The “Semantics-Oriented” viewpoint focuses technologies for accessing and leveraging the semantics of IoT data and services based on semantic web technologies, reasoning technologies etc.
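    To make the “Semantics-Oriented” viewpoint more concrete, here is a minimal sketch of a sensor reading annotated with semantic metadata in a JSON-LD style, loosely following the W3C SOSA/SSN vocabulary. The specific property names and the device identifier are illustrative assumptions, not part of any particular platform discussed above:

```python
import json

# Hypothetical sensor reading annotated in a JSON-LD style, loosely
# modeled on the W3C SOSA/SSN observation vocabulary. The values and
# the device URN are illustrative only.
reading = {
    "@context": {"sosa": "http://www.w3.org/ns/sosa/"},
    "@type": "sosa:Observation",
    "sosa:observedProperty": "temperature",
    "sosa:hasSimpleResult": 21.5,
    "sosa:resultTime": "2016-05-22T10:00:00Z",
    "sosa:madeBySensor": "urn:dev:sensor-42",  # hypothetical device id
}

# Serialize the annotated reading so other services can exchange
# and reason over it, rather than receiving a bare number.
payload = json.dumps(reading, indent=2)
print(payload)
```

    The point of such annotations is that a consumer can discover *what* was measured, *when*, and *by which sensor* without out-of-band agreements about the data format.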

    Independently of one’s viewpoint about IoT and IoT technologies, any non-trivial IoT system is expected to comprise the following elements:

    • Sensors and Actuators.
    • Communication infrastructure connecting devices to servers or server platforms.
    • Server/Middleware Platforms.
    • Data Analytics Engines.
    • Apps (iOS, Android, Web).

    These elements are evident in the functional viewpoints of the various architectural frameworks that have been introduced for IoT, which we will present in following posts.
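    The five elements above can be sketched end-to-end in a few lines. The following is a toy simulation, not a real platform: all class names are hypothetical, and the “communication infrastructure” is reduced to a function call:

```python
import random
from statistics import mean

class Sensor:
    """Sensor/actuator layer: emits simulated temperature readings."""
    def read(self):
        return round(20 + random.uniform(-2, 2), 2)

class Middleware:
    """Server/middleware layer: stores readings received over the
    (here simulated) communication infrastructure."""
    def __init__(self):
        self.readings = []
    def ingest(self, value):
        self.readings.append(value)

class Analytics:
    """Analytics engine: derives a summary from the stored data."""
    def summarize(self, readings):
        return {"count": len(readings), "avg": round(mean(readings), 2)}

# "App" layer: wire the pieces together and query a summary.
sensor, middleware = Sensor(), Middleware()
for _ in range(10):                      # communication step, simulated
    middleware.ingest(sensor.read())
summary = Analytics().summarize(middleware.readings)
print(summary)
```

    In a real deployment each class would be a separate system (a constrained device, an MQTT/HTTP gateway, a server cluster, a BI dashboard), but the division of responsibilities is the same.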

    IoT is gradually becoming very closely affiliated with cloud computing infrastructures and BigData infrastructures (including data analytics frameworks). These infrastructures provide IoT applications with the means of leveraging the scalability, capacity and reliability of the cloud, along with the data processing and analysis capabilities of BigData systems. In later posts we will discuss both IoT/cloud integration and the topical subject of IoT analytics (including mining of IoT data streams). However, we will start with an illustration of the “things”-oriented technologies that underpin IoT, such as RFID and WSN.
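    As a small taste of stream mining over IoT data, here is a sketch of a sliding-window moving average: a fixed-size window keeps only the most recent readings, so the statistic adapts as new data arrives without storing the whole stream. The readings are made-up values for illustration:

```python
from collections import deque

def moving_averages(stream, window=3):
    """Compute a moving average over a stream using a fixed-size window."""
    buf = deque(maxlen=window)   # old readings fall out automatically
    out = []
    for value in stream:
        buf.append(value)
        out.append(round(sum(buf) / len(buf), 2))
    return out

readings = [20.0, 21.0, 23.0, 22.0, 30.0]   # 30.0 mimics a spike
print(moving_averages(readings))
# → [20.0, 20.5, 21.33, 22.0, 25.0]
```

    Real IoT analytics engines apply the same windowing idea at scale, over many concurrent device streams.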

    Further Reading & Study:
    1) There are a number of videos illustrating IoT and introducing some motivating functionalities. Some of these are listed below:

    2) One can read about the different viewpoints of IoT at:
    Atzori et al. / Computer Networks 54 (2010) 2787–2805
    3) The IoT/Cloud integration is explained at:
    Gubbi et al. / Future Generation Computer Systems 29 (2013) 1645–1660
    4) The IERC cluster books provide a wealth of information about IoT. The most recent book (published in 2015) is titled “Building the Hyperconnected Society – IoT Research and Innovation Value Chains, Ecosystems and Markets” (IERC Cluster Book 2015).

    View all IoT Tutorial Chapters


  • S2C tutorial and PROTOTYPICAL debut at DAC

    S2C tutorial and PROTOTYPICAL debut at DAC
    by Don Dingee on 05-18-2016 at 4:00 pm

    It’s been a busy few days here in Canyon Lake, and we’re ready to share exciting news in advance of #53DAC coming up on Monday, June 6th. S2C is offering a technical program tutorial on “Overcoming the Challenges of FPGA Prototyping” followed by the launch of our latest book project, “PROTOTYPICAL”, including a field guide authored by S2C. Continue reading “S2C tutorial and PROTOTYPICAL debut at DAC”


    The Xiaomi Redmi Note 3 Significantly Improves Performance

    The Xiaomi Redmi Note 3 Significantly Improves Performance
    by Patrick Moorhead on 05-18-2016 at 12:00 pm

    The smartphone market has been experiencing many changes, including a slowing of its overall pace of growth. One of the remaining growth segments is the mid-range, priced around $200. The big players in the smartphone market have traditionally ignored these phones, or at least haven’t put up their best offerings there. However, with the growth of the mid-range and the chips now available to OEMs, the bar for mid-range phones has started to rise.

    One of the places experiencing the most mid-range growth globally is India, where demand for quality mid-range devices is rising quickly. One device looking to satisfy that demand is the new Redmi Note 3, which now includes Qualcomm’s new Snapdragon 650 processor. We wrote a technical review of the new phone and performed benchmarks; you can download it here.

    The Redmi Note 3 actually comes in two versions, with the China version having launched in November and the version for the rest of the world launching in India on March 3rd, 2016. The primary difference between the two models is that Xiaomi has swapped out the MediaTek Helio X10 for a Qualcomm Snapdragon 650. Changing SoCs is no small task, especially when you take into account that many other chips and the PCB end up changing as well. Xiaomi’s switch to the Snapdragon 650 moves the phone from an 8-core CPU to a 6-core CPU, but actually increases CPU performance thanks to the two A72 cores in the Snapdragon 650. The Snapdragon 650 also brings the Adreno 510 GPU to the new Redmi Note 3, offering major performance and efficiency improvements over the previous model.

    In addition to a new 6-core processor, an improved GPU and overall efficiency gains, the Xiaomi Redmi Note 3 comes with a 4,000 mAh battery that further extends battery life, giving it some impressive longevity. Xiaomi didn’t stop there, either: this phone also has an aluminum unibody design as well as an integrated fingerprint sensor for biometric security and ease of use. All of these features fit into a 5.5” phone with a 1080p display; that isn’t the highest resolution available, but then again this isn’t a high-end phone, which is easy to forget given all of its other features.

    Based on our SoC testing, which covered many components of the chip, we determined that the Snapdragon 650 is one of the most efficient and highest-performing processors in this class of devices. With the Snapdragon 650 inside, Xiaomi has delivered significant value with the Redmi Note 3 and, in our testing, has significantly raised the bar for performance and battery life. Many of the battery life and performance figures surpassed our expectations. Xiaomi really has a winner with the Redmi Note 3, and launching it in India first is a smart move: demand for such devices there is extremely high, and performance and battery life are extremely important to that market.

    If you would like to see the results of our testing and the figures that supported our conclusions about the Redmi Note 3 download the full paper here.

    More from Moor Insights and Strategy