Chips on the road to deep learning
by Don Dingee on 01-20-2016 at 7:00 am

CES has been morphing into an automotive show for several years now. Chipmakers were pitching control solutions, infotainment solutions, then connectivity solutions. Phone makers pitched device integration. Automotive electronics suppliers pitched MEMS sensors and cameras. Now, with a lot of pieces in place, the story in 2016 has turned to a system-level solution.

And it isn’t self-driving vehicles. Every time someone says autonomous, an angel gets its wings – but for regulatory and legal and cultural reasons, large-scale deployment of autonomous vehicles is still generations away. Researchers will research, and that’s good, but the real money for auto companies and chip suppliers is in ADAS: advanced driver assistance systems.

The embedded chip companies – Freescale/NXP, Renesas, TI – were out in front for a while. These firms were all deep in automotive control with safety-qualified parts, so the leap to infotainment and connectivity wasn’t huge. Intel tried to tell an infotainment story, with so-so results. NVIDIA snagged headlines with its Tegra SoCs in a Tesla console win, and made headway in luxury display segments. Mobileye combined MIPS cores with embedded vision processing for dedicated EyeQ ADAS chips, coming from nowhere to 10 million cars. Qualcomm now wants in with Snapdragon 820 Automotive and Snapdragon 602A solutions tuned for cars.

Suddenly, there is a battle royale developing around who can create the algorithms for ADAS integration, from the vehicle to the cloud.

This isn’t exactly a new idea. Several years ago, in a restaurant at the Venetian during CES, I met the folks from INRIX, who were using GPS, mobile apps, and cloud algorithms to create real-time traffic mapping. They took technology deployed in commercial fleets and massaged it for consumer smartphone tastes, but they still face difficulty monetizing beyond the commercial space.

What if that capability could be integrated into cars, and not just luxury models but more mainstream offerings? Now this gets very interesting for a lot of chipmakers. One analyst firm, Wunderlich, writes that high-end ADAS content per car may eventually be an order of magnitude above a typical mobile device – for example, perhaps 8 to 10 cameras per car with associated vision processing.

But for that kind of broad adoption, ADAS has to fully integrate with the automotive control systems. That mandates a move from garden-variety mobile SoCs that burp at extremely inconvenient times to more robust, automotive qualified parts.

In many ways, this surge in automotive chip interest resembles the COTS push in defense technology years ago. Defense electronics suffered as more and more semiconductor suppliers bailed out of the mil-spec business, unable to sustain product development and manufacturing costs for a minuscule market dwarfed by consumer electronics. The response was the Perry Memo, allowing more commercial-grade technology in select use cases.

Automotive under-hood applications are renowned for being even nastier than many mil-spec applications, with harsh environmental requirements and relatively low volumes that again sent many suppliers for the exit. Fortunately, foundries and processes caught up, so chip fabrication is not as big a barrier as it used to be. Safety-critical design is now a big hurdle.

So are the algorithms, and honestly that comes first – more on the safety-critical angle shortly.

NVIDIA has fired up its DRIVE PX 2, a massive 250W liquid-cooled supercomputer in a box featuring the latest Tegra technology with 64-bit ‘Denver’ ARMv8 cores and Pascal GPU cores. NVIDIA thinks GPU computing is a fit for a deep neural net (DNN) cloud leveraging a common architecture running CUDA. Deep learning will be crucial to object recognition and motion tracking combined with mapping elements and other context from the cloud.

Mobileye says that makes for a nice demonstration, but getting to production algorithms takes a lot more doing. They are aiming for a proprietary mapping technology running on EyeQ chips called Road Experience Management (REM), which chews through road information and localization at 10 Kb/km, compared to Google’s current HD mapping technology at Gb/km kinds of numbers. In theory, carmakers using EyeQ can flip on new vehicle software and build a “road book”.

CEVA is telling carmakers to hold their horses. Just as in mobile, where CEVA used more power-efficient DSP core IP to enable chipmakers to differentiate 4G LTE solutions, CEVA is creating IP for ADAS solutions. They have coupled their CEVA-XM4 vision engine with the open-source Caffe deep learning framework, creating a licensable solution for chipmakers called the CEVA Deep Neural Network.

Why is CEVA confident enough to step right into the middle of this heavyweight fight? CEVA says it performs deep learning 3x faster, with 30x less power and 15x less bandwidth, compared to the NVIDIA approach. Some of the gain is efficient silicon in the XM4, but much of it is a floating-point to fixed-point conversion step that cuts bandwidth without sacrificing accuracy.
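To see why the fixed-point step saves bandwidth, here is a minimal Python sketch of symmetric 8-bit weight quantization, the generic technique behind such conversions; the bit width and scaling scheme here are illustrative assumptions, not CEVA’s actual CDNN conversion.

```python
import numpy as np

def quantize_to_int8(weights: np.ndarray):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_to_int8(weights)

# int8 weights move 4x fewer bytes per layer than float32, and
# fixed-point MACs are cheaper in silicon than floating-point ones.
print(weights.nbytes // q.nbytes)   # 4
print(float(np.abs(weights - dequantize(q, scale)).max()))  # small error
```

Calibrating the scale per layer is what keeps the accuracy loss negligible in conversions of this kind.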

Compared to the Mobileye REM approach, CEVA says it is more open to customized algorithms in end-to-end solutions with tuned hardware and software. On top of that, the CEVA-XM4 is now certified to ISO 26262, making the XM4 currently the only licensable vision processor IP supporting the ASIL B safety integrity level.

There is also mounting competition. We already mentioned Qualcomm. Samsung has created its own automotive division, and both Huawei and LG are also after automakers. There is the stealth-mode Apple automotive team doing who knows what. And there’s Faraday Future, a new firm with Tesla-like aspirations. This could put more chipmakers, or even automakers looking to self-design chips, in the market for licensable IP.

I’ve said several times that the secret of Qualcomm’s success has been a tight coupling between algorithm and silicon, with the examples of the Viterbi decoder and CDMA chipsets. After this initial phase of basic ADAS chips, we’re likely to see that the long-term winner in ADAS creates this same style of algorithmic coupling, adding cloud-based technology for an end-to-end solution optimized on ultra-low power silicon at the edge. (Qualcomm is also moving into deep learning with Zeroth.)

Can CEVA create an IP-based ADAS ecosystem quickly enough to compete with the head start of Mobileye and NVIDIA and a new thrust from Qualcomm? Is CEVA’s bet on ISO 26262 certification well placed? For another perspective, with a bit more detail on the Mobileye and NVIDIA pressers, Junko Yoshida had some excellent CES 2016 ADAS coverage in EETimes. The team behind Caffe also has a website.

More articles from Don…


Coventor ASML IMEC: The last half nanometer
by Scotten Jones on 01-19-2016 at 4:00 pm

On Tuesday evening, December 8th, at IEDM, Coventor held a panel discussion entitled “The last half nanometer”. Coventor is a leading provider of simulation software used to design semiconductor processes. This is my third year attending the Coventor panel discussion at IEDM, and they are always excellent, with very strong panels and discussion.

Continue reading “Coventor ASML IMEC: The last half nanometer”


How to Secure the Internet of Things (IoT)?
by Ahmed Banafa on 01-19-2016 at 12:00 pm

The Internet of Things (IoT) as a concept is fascinating and exciting, but the key to gaining real business value from it is effective communication between all elements of the architecture so you can deploy applications faster, process and analyze data at lightning speed, and make decisions as soon as possible.

IoT architecture can be represented by four systems:


  • Things: These are defined as uniquely identifiable nodes, primarily sensors that communicate without human interaction using IP connectivity.
  • Gateways: These act as intermediaries between things and the cloud to provide the needed Internet connectivity, security and manageability.
  • Network infrastructure: This is comprised of routers, aggregators, gateways, repeaters and other devices that control data flow.
  • Cloud infrastructure: Cloud infrastructure contains large pools of virtualized servers and storage that are networked together.
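As a toy illustration of how these four systems fit together, here is a minimal Python sketch of one reading flowing from a thing through a gateway to the cloud; the class and field names are hypothetical, not any particular IoT stack, and the network infrastructure between gateway and cloud is elided.

```python
from dataclasses import dataclass, field

@dataclass
class Cloud:
    """Virtualized storage pool where readings land for analysis."""
    store: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        self.store.append(reading)

@dataclass
class Gateway:
    """Intermediary adding Internet connectivity, security, manageability."""
    cloud: Cloud

    def forward(self, reading: dict) -> None:
        # A real gateway would authenticate the device and encrypt here.
        self.cloud.ingest(reading)

@dataclass
class Thing:
    """Uniquely identifiable sensor node communicating over IP."""
    node_id: str

    def sense(self) -> dict:
        return {"node": self.node_id, "temp_c": 21.5}

cloud = Cloud()
gateway = Gateway(cloud)
gateway.forward(Thing("sensor-001").sense())  # no human in the loop
print(cloud.store)                            # [{'node': 'sensor-001', ...}]
```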


    Next-generation trends, namely Social Networks, Big Data, Cloud Computing, and Mobility, have made possible many things that weren’t just a few years ago. Add to that the convergence of global trends and events fueling today’s technological advances and enabling innovation, including:

    • Efficiency and cost-reduction initiatives in key vertical markets
    • Government incentives encouraging investment in these new technologies
    • Lower manufacturing costs for smart devices
    • Reduced connectivity costs
    • More-efficient wired and wireless communications
    • Expanded and affordable mobile networks

    The Internet of Things (IoT) is one big winner in this entire ecosystem. IoT is creating new opportunities and providing a competitive advantage for businesses in current and new markets. It touches everything—not just the data, but how, when, where and why you collect it. The technologies that created the Internet of Things aren’t changing only the internet; they are changing the things connected to the internet—the devices and gateways on the edge of the network that are now able to request a service or start an action without human intervention at many levels.

    Because the generation and analysis of data is so essential to the IoT, consideration must be given to protecting data throughout its life cycle. Managing information at this level is complex because data will flow across many administrative boundaries with different policies and intents. Generally, data is processed or stored on edge devices that have highly limited capabilities and are vulnerable to sophisticated attacks.

    Given the various technological and physical components that truly make up an IoT ecosystem, it is good to consider the IoT as a system-of-systems. The architecting of these systems that provide business value to organizations will often be a complex undertaking, as enterprise architects work to design integrated solutions that include edge devices, applications, transports, protocols, and analytics capabilities that make up a fully functioning IoT system. This complexity introduces challenges to keeping the IoT secure, and ensuring that a particular instance of the IoT cannot be used as a jumping off point to attack other enterprise information technology (IT) systems.

    International Data Corporation (IDC) estimates that 90% of organizations that implement the IoT will suffer an IoT-based breach of back-end IT systems by the year 2017.

    Challenges to Secure IoT Deployments
    Regardless of the role your business has within the Internet of Things ecosystem—device manufacturer, solution provider, cloud provider, systems integrator, or service provider—you need to know how to get the greatest benefit from this new technology, which offers such highly diverse and rapidly changing opportunities.

    Handling the enormous volume of existing and projected data is daunting. Managing the inevitable complexities of connecting to a seemingly unlimited list of devices is complicated. And the goal of turning the deluge of data into valuable actions seems impossible because of the many challenges. Existing security technologies will play a role in mitigating IoT risks, but they are not enough. The goal is to get data securely to the right place, at the right time, in the right format; that is easier said than done for many reasons. The Cloud Security Alliance (CSA), in a recent report, listed some of the challenges:

    • Many IoT systems are poorly designed and implemented, using diverse protocols and technologies that create complex configurations.
    • IoT technologies and business processes are not yet mature.
    • There is limited guidance for life-cycle maintenance and management of IoT devices.
    • The IoT introduces unique physical security concerns.
    • IoT privacy concerns are complex and not always readily evident.
    • Few best practices are available for IoT developers.
    • There is a lack of standards for authentication and authorization of IoT edge devices.
    • There are no best practices for IoT-based incident response activities.
    • Audit and logging standards are not defined for IoT components.
    • IoT devices have only restricted interfaces for interacting with security devices and applications.
    • There is no focus yet on identifying methods for achieving situational awareness of the security posture of an organization’s IoT assets.
    • Security standards for platform configurations involving virtualized IoT platforms supporting multi-tenancy are immature.
    • Customer demands and requirements change constantly.
    • New uses for devices, as well as new devices themselves, sprout and grow at breakneck speed.
    • Inventing and reintegrating must-have features and capabilities is expensive and takes time and resources.
    • The uses for Internet of Things technology are expanding and changing, often into uncharted waters.
    • Developing the embedded software that provides Internet of Things value can be difficult and expensive.


    Some real examples of threats and attack vectors that malicious actors could take advantage of are:

    • Control systems, vehicles, and even the human body can be accessed and manipulated, causing injury or worse.
    • Health care providers can be misled into improperly diagnosing and treating patients.
    • Intruders can gain physical access to homes or commercial businesses.
    • Vehicle control can be lost.
    • Safety-critical information, such as a warning of a broken gas line, can go unnoticed.
    • Critical infrastructure can be damaged.
    • Malicious parties can steal identities and money.
    • Personal or sensitive information can leak unexpectedly.
    • People’s locations, behaviors, and activities can be tracked without authorization.
    • Financial transactions can be manipulated.
    • IoT assets can be vandalized, stolen, or destroyed.
    • Attackers can gain unauthorized access to IoT devices.
    • Attackers can impersonate IoT devices.

    Dealing with the challenges and threats
    Gartner predicted at its security and risk management summit in Mumbai, India, this year that more than 20% of businesses will have deployed security solutions for protecting their IoT devices and services by 2017. IoT devices and services expand the surface area for cyber-attacks on businesses by turning physical objects that used to be offline into online assets communicating with enterprise networks. Businesses will have to respond by broadening the scope of their security strategy to include these new online devices.

    Businesses will have to tailor security to each IoT deployment according to the unique capabilities of the devices involved and the risks associated with the networks connected to those devices. BI Intelligence expects spending on solutions to secure IoT devices and systems to increase fivefold over the next four years.


    The Optimum Platform

    Developing solutions for the Internet of Things requires unprecedented collaboration, coordination, and connectivity for each piece in the system, and throughout the system as a whole. All devices must work together and be integrated with all other devices, and all devices must communicate and interact seamlessly with connected systems and infrastructures. It’s possible, but it can be expensive, time consuming, and difficult.

    The optimum platform for IoT can:

    • Acquire and manage data to create a standards-based, scalable, and secure platform.
    • Integrate and secure data to reduce cost and complexity while protecting your investment.
    • Analyze data and act by extracting business value from data, and then acting on it.

    Last word…
    Security needs to be built in as the foundation of IoT systems, with rigorous validity checks, authentication, and data verification, and all data needs to be encrypted. At the application level, software development organizations need to become better at writing code that is stable, resilient, and trustworthy, with better code-development standards, training, threat analysis, and testing. As systems interact with each other, it is essential to have an agreed interoperability standard that is safe and valid. Without a solid bottom-up structure, we will create more threats with every device added to the IoT. What we need is a secure and safe IoT with privacy protected: a tough trade-off, but not impossible.
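To make “built in” concrete at the message level, here is a minimal Python sketch of authenticated encryption for a device reading using AES-GCM from the cryptography package; the device naming and framing are illustrative assumptions, and real deployments also need key provisioning, rotation, and secure key storage.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: in production the key comes from secure provisioning,
# not from a fresh call at startup.
device_key = AESGCM.generate_key(bit_length=128)

def seal(key: bytes, device_id: str, payload: bytes) -> bytes:
    """Encrypt and authenticate a reading; the device ID is bound as AAD,
    so a blob replayed under another identity fails verification."""
    nonce = os.urandom(12)  # 96-bit nonce; must never repeat for a given key
    return nonce + AESGCM(key).encrypt(nonce, payload, device_id.encode())

def open_sealed(key: bytes, device_id: str, blob: bytes) -> bytes:
    """Verify and decrypt; raises InvalidTag on tampering or a wrong ID."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], device_id.encode())

blob = seal(device_key, "sensor-001", b'{"temp_c": 21.5}')
print(open_sealed(device_key, "sensor-001", blob))  # b'{"temp_c": 21.5}'
```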


    How to Build a Deadlock-Free Multi-core SoC?
    by Eric Esteve on 01-19-2016 at 7:00 am

    We will explain precisely what deadlock means in a modern, complex multi-core SoC. First, let’s take a look at the crash of Air France Flight 296, when a brand-new Airbus A320 went down during a demo flight on June 26, 1988. This A320, the first completely automated airliner thanks to the FADEC flight system, was flying a demonstration. The pilot decided to mimic a landing just above the airport, without actually touching down; the goal was to demonstrate how brilliant the plane was. Unfortunately, such a maneuver was not recognized by the FADEC, and the flight system, finding itself blocked, decided to reset. The problem was that in 1988 the reset/restart took 7 seconds, and during that time the pilot was trying to apply power and climb (as he could have done without any problem on a previous-generation plane). The result is shown in the picture below… At the time there was no SoC, just a 68020 processor, but what happened is the equivalent of a deadlock in a modern SoC: the system becomes blocked, and there is no way to escape that state except through a reset.

    Let’s jump ahead to 2016: electronic systems are now based on systems-on-chip integrating multiple, if not many, processor cores (CPU/GPU/DSP) and several dozen IP blocks. As soon as you architect such a complex multi-core SoC, you have to design an interconnect IP in order to exchange data between the multiple agents. It can be a bus-based or an internally developed interconnect IP, but the trend is to move to a commercial Network-on-Chip (NoC) IP, like the NetSpeed SoC interconnect IP generated by NocStudio.

    Now consider a simplified case study where two agents share the same interconnect, as pictured below. We can intuitively understand that a deadlock occurs when one agent (agent0) needs to read data to complete a task, where that data is a response generated by the other agent (agent1), which is itself waiting to read the result of the operation done by agent0. This situation creates a dependency, illustrated by the red arrows: read requests can complete only when a read response can be issued in the other direction. A deadlock occurs if the buffers in both directions are full of read requests and there is no way to send read responses. Chicken-and-egg is a pertinent illustration of this kind of dependency… When deadlock occurs, the system is stuck, and there is no way to escape this state but to reset/restart it, assuming the architects anticipated this type of event and integrated adequate test structures. If you remember the beginning of this blog, resetting a system can be dramatic, even if, in most electronic systems, the risk is losing data, not lives.
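Here is a toy Python model of that two-agent scenario, assuming tiny bounded buffers shared by read requests and read responses in each direction; it reproduces the failure mode described above and is only an illustration, not NetSpeed’s model.

```python
from collections import deque

CAPACITY = 2  # tiny buffers make the failure easy to reproduce

# One shared channel per direction; read requests and read responses
# compete for the same buffer slots, and that sharing is the dependency.
to_agent1 = deque()  # traffic flowing agent0 -> agent1
to_agent0 = deque()  # traffic flowing agent1 -> agent0

def try_send(channel: deque, message: tuple) -> bool:
    """Deliver a message only if a buffer slot is free."""
    if len(channel) < CAPACITY:
        channel.append(message)
        return True
    return False

# Both agents flood the channels with read requests first.
for i in range(CAPACITY):
    try_send(to_agent1, ("READ_REQ", f"agent0-{i}"))
    try_send(to_agent0, ("READ_REQ", f"agent1-{i}"))

# Each agent now wants to answer the request at the head of its input
# queue, but every response must travel on the (full) opposite channel.
blocked0 = not try_send(to_agent1, ("READ_RESP", "agent0-answer"))
blocked1 = not try_send(to_agent0, ("READ_RESP", "agent1-answer"))
print("deadlock:", blocked0 and blocked1)  # True: neither queue can drain
```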

    Before using NetSpeed’s NocStudio, SoC architects relying on home-made interconnects had to plan an SoC validation campaign to detect potential deadlocks. The problem with such a strategy is the awfully long lead time associated with simulation. You have to run functional simulations, and it may take weeks if not months of compute-intensive validation to discover the first deadlock, even though the same deadlock might appear within hours or days once the SoC is integrated into the real system. The entire process (identify deadlocks, fix the design, rerun simulation) could take as long as six months, which is unacceptable with respect to Time-To-Market (TTM) requirements.

    NetSpeed’s NocStudio is used at the architecture definition level, when the architect specifies the communication protocol inside the SoC. NetSpeed IP achieves full deadlock detection and resolution by partitioning complex protocol transactions into simpler sub-flows from one endpoint to the next. The deadlock in Figure 2 can be avoided by providing separate resources for the read and read-response packets: adding a virtual channel to the network creates an alternative read-response path. Machine learning algorithms are used to automatically learn the correct order in which sub-flows are processed and mapped to virtual networks.

    Detecting protocol deadlocks requires knowing the properties of all system components: how they produce and consume network packets, and how those packets are inter-related. Designers use a flexible formal language to capture the deadlock-relevant properties of the various system components. Dependencies can be specified in NocStudio in two ways: implied within a traffic description, or specified explicitly. This information is subsequently used to construct a deadlock-free NoC at the network level, which is why NetSpeed uses the “correct by construction” slogan to describe the NoC generated by NocStudio.
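A standard way to check such a specification is to build a dependency graph over channel resources from the declared flows and search it for cycles, since an acyclic dependency graph cannot deadlock. The sketch below shows that generic check (not NocStudio’s actual formal engine) and how giving responses their own virtual channel breaks the cycle from the earlier example.

```python
def find_cycle(deps: dict):
    """Depth-first search for a cycle in a dependency graph.
    Returns one cycle as a list of nodes, or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in deps}
    stack = []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, ()):
            if color.get(m, WHITE) == GRAY:  # back edge: m is still on stack
                return stack[stack.index(m):] + [m]
            if color.get(m, WHITE) == WHITE:
                cycle = dfs(m)
                if cycle:
                    return cycle
        color[n] = BLACK
        stack.pop()
        return None

    for n in list(deps):
        if color[n] == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None

# "a: {b}" reads "a can make progress only if b can". With one shared
# channel per direction, responses depend on the same buffers as requests:
shared = {
    "req_0to1": {"resp_1to0"},   # agent0's request completes via a response
    "resp_1to0": {"req_1to0"},   # ...which shares buffers with 1->0 requests
    "req_1to0": {"resp_0to1"},
    "resp_0to1": {"req_0to1"},
}
print(find_cycle(shared))  # a cycle: potential deadlock

# A virtual channel gives responses dedicated buffers, breaking the loop.
split = {
    "req_0to1": {"resp_1to0_vc1"}, "resp_1to0_vc1": set(),
    "req_1to0": {"resp_0to1_vc1"}, "resp_0to1_vc1": set(),
}
print(find_cycle(split))   # None: deadlock-free by construction
```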

    We have known for decades that state machines can lock up and put a chip in trouble, forcing a reset of the complete electronic system. As architects define ever more complex and heterogeneous SoCs, the probability of introducing dependencies increases dramatically, with the direct consequence of putting the SoC into deadlock. Even though deadlocks can be detected by running simulations and fixed by design, that iterative process is far too long to comply with TTM requirements. NetSpeed proposes a solution: design an interconnect IP that is correct by construction, using NocStudio to generate Orion, a deadlock-free NoC.

    By Eric Esteve from IPNEST

    More articles from Eric…


    Decisive Floorplanning for Faster Design Closure
    by Pawan Fangaria on 01-18-2016 at 4:00 pm

    Semiconductor design automation at the system level is gaining its due importance today. It needs an effective, efficient, and seamless flow from system level down to silicon. There is a lot of effort going into automating SoC design exploration at the system level, but that eventually stops at RTL; another level of flow automation takes over from RTL to physical implementation, and then to silicon.

    Traditional flows from RTL to physical design are inefficient because of the long iterations between gate-level physical implementation and RTL; sometimes it becomes time-prohibitive or impossible to correct the design at RTL, leaving teams to live with inefficient workarounds at the gate level. This is because RTL synthesis has no physical information until gate-level implementation. Synthesis and floorplanning tools, which tie together front-end design at the RTL level and back-end design at the gate level, rely on inaccurate RC and physical models.

    As design sizes increase and move up to the system level, it becomes all the more important for the RTL-to-gate flow to be less iterative and more decisive, with enough accuracy at the RTL level that design signoff can be done at RTL without waiting for physical implementation. A decisive floorplanning tool at the RTL level can facilitate design prototyping, SoC full-chip planning, and IP signoff at RTL.

    I like Mentor’s next-generation physical RTL synthesis and floorplanning tools, including RealTime Designer™ and other products. The floorplan is created at the RTL level based on the high-level RTL modules, macros, and design data flow. This enables high-level design optimization and accurate timing and congestion analysis. Incremental changes can be made to the floorplan to adjust PPA (power, performance, and area), congestion, and routability.

    Physical hierarchies are honored, and RTL partitions are correctly assigned within the physical boundaries of the appropriate partitions. Mentor’s patented technology is used to access the detailed netlist of each RTL partition to accurately time the design. Synthesis and floorplanning with real physical and timing information make the flow reliable and predictable, reducing the number of iterations and the time to design closure.

    The floorplan can be generated from RTL either automatically or incrementally by placing macros and assigning pins according to constraints such as die size, macros, and pin locations, provided by the user or supplied in a DEF (Design Exchange Format) file.

    The designer can perform fly line and connectivity analysis between regions before placing physical partitions. The regions can be shaped and sized automatically depending on the die size and utilization constraints. All advanced floorplanning features such as fences, blockage, physical guidance, and rectilinear boundaries for space optimization are enabled.

    To implement an efficient and testable design, scan insertion can be done during synthesis which enables early debugging of test problems and allows better scan architecture, reduced length of scan nets, and improved routability. Mentor’s tools are very efficient; a design with 10 million instances that might contain 1 million flops can be processed for scan analysis in 10 minutes and scan insertion in 20 minutes.

    The advanced physical modeling techniques used during RTL synthesis provide very close correlation to the P&R system; timing correlation comes within 5% of Mentor’s P&R system. This provides confidence in synthesis and floorplanning QoR and naturally minimizes back-end to front-end iterations. Some of the actual experimental results can be found in a whitepaper on Mentor’s website.


    Mentor’s physical RTL synthesis and floorplanning tools are provided with a powerful integrated cockpit where cross-probing can be done easily between RTL and physical databases associated with different design views (logical, physical, or timing). The static and dynamic power map, congestion map, timing map, and hierarchical floorplan view can be debugged easily with different physical views. Any required changes can be done and synthesis re-run quickly to optimize the design.

    Mentor’s physical RTL synthesis and floorplanning tools come with high capacity and speed that can enable multiple parallel floorplan explorations with different recipes for design alternatives. The best configuration for implementation can be decided after analyzing results from all runs. The floorplan exploration report contains data such as TNS, WNS, instance-count, area, power, congestion, and wire-length.

    The tools and full-chip flat run methodologies as described above in Mentor’s RTL synthesis and floorplanning system provide up to 10x higher performance compared to traditional physical synthesis tools, and also equal or better QoR (area, timing, placement, and production quality floorplan). Designs up to 100 million gates can be easily accommodated.

    Arvind Narayanan has described the tools and methodology in detail with some impressive experimental results in a whitepaper at Mentor’s website HERE.

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com


    Internet of Things 2015 Year End Review (3): IoT Opportunities and Risks Insights from Patents
    by Alex G. Lee on 01-18-2016 at 12:00 pm

    New IoT Product/Service Development
    Even though the IoT has been getting huge attention recently, the concept of billions of interconnected devices is not new and has been under development for over 10 years. Thus, there is a large number of related patented technologies that can be exploited to develop new products/services, and thus new business, for the emerging IoT market.

    The basic principle of the Blue Ocean Patent Strategy is to exploit existing patents to achieve value innovation, and thus to serve customers in fundamentally different ways. For example, a company in the consumer electronics industry that wants to enter the connected car industry can exploit existing patents that cover the factors of its strategy canvas. By deciding which factors (covered by the existing patents) are really crucial, and thus need to be raised and/or created, it can develop a new business model that changes the nature of competition away from the incumbent industry’s typical direction. Patents covering superior UI/UX, compact/portable design, and robust wireless connectivity are good candidates for the blue ocean factors.

    Another way to achieve value innovation is to integrate into existing platforms. HBS Professor Iansiti, the author of “The Keystone Advantage,” suggested technological assimilation as a new engine for technological innovation in his article “Creative Construction.” In the technological assimilation framework, a core innovation that once provided stand-alone products or services for a specific market can become a building block for mass-market-generating innovations through assimilation into broader platforms that did not exist at the time of the original innovation. A good example is GPS technology providing location-based services (LBS) applications for smartphones and automobiles. Another is TiVo’s DVR technology, which was integrated into digital set-top-box platforms. By contrasting all possible value propositions of an existing platform with the potential value propositions provided by the inventive departures (novelty points) of the existing patent(s), one can assess the feasibility of integrating the existing patented technologies into the platform and thus develop new integrated value propositions.

    Patents can also be exploited to identify new IoT product/service development opportunities through scenario analysis. Scenario analysis can show potential interactions between future users and a new IoT product/service via the specific usage of the product/service and the behavior of the user in the environment created by the product/service functionality and offering. Thus, scenario analysis exploiting patent information can yield a new IoT product/service concept (e.g., specific benefits to the user, product design, product functionality, and the technology for the product).

    Investment for New IoT Innovations/Startups/M&A/Joint Venture

    Patent information can provide insights into the state of the art of IoT innovation, so one can identify potential areas for further R&D (“white space”) that can lead to new product/service development through patent analysis. For example, an in-depth analysis of smart home patents reveals that most of the patent applications are for incremental innovations of current market products/services, while 13% of the total are for innovations in potential new-market products/services: an adaptable autonomous smart home system (a system that can recognize the contextual or semantic profile of a person, place, or device (the physical environment) based on data sensed by IoT devices, determine the particular interpretation instructions (i.e., define particular IoT device control rules) associated with that physical environment, and dynamically update the control rules as the physical environment changes); self-aware, self-healing smart home systems; artificial intelligence, including deep learning applications; cloud/big data/SaaS; robots; new UX for smart home systems; cross-industry convergence; and emotion-aware smart home systems. Patent information can also provide insights into new startups that could come to play a leading role in the emerging IoT market (and thus be good investment/M&A targets) and into joint venture opportunities.

    New Patent Monetization Model

    A strategically packaged patent portfolio is a collection of patents whose integrated value propositions target specific value propositions provided by emerging new products/services. A strategically packaged patent portfolio aligned with specific IoT business interests (e.g., smart home automation) can be developed using existing patents and/or by strategically developing new ones. Development of such a portfolio requires a deep understanding of IoT technologies, extensive experience in patent analysis and development, and insight into the emerging IoT market. The portfolio can then be monetized through patent sale, patent licensing, commercialization, spin-off, patent banking, and patent-backed financing. For detailed information, please visit http://www.slideshare.net/alexglee/best-practices-of-ip-patent-strategy-iot-internet-of-things-case-study.

    Potential Patent Disputes

    As we saw in the development of the smartphone market, the super-competition to seize leadership in the lucrative IoT market can be expected to lead to another round of patent wars. The post-smartphone patent wars, however, will be more extensive because of broader participation by players across several different industries. They will also be more complex because of the recent rapid change in the legal environment and the learning curve from the smartphone patent wars.

    To assess the potential patent dispute risk of the IoT, the patent landscapes of three major IoT applications – smart home, connected car, and connected health (including healthcare/fitness wearable devices) – and of IoT platform connectivity were researched. The following summarizes the potential patent dispute risk for each subfield of the IoT.

    Smart Home
    An interesting point in the smart home patent landscape is the large number of patents owned by two startups – Allure Energy (9%) and Ecofactor (5%) – that have not been successful in the IoT smart home market compared to market leaders such as Nest Labs and Ecobee. A large number of patents held by commercially unsuccessful companies is a potential patent dispute risk for smart home companies.

    Connected Car
    An interesting point of the connected car patent landscape is the large number of patents owned by two active patent monetizing entities – Omega Patents and American Vehicular Sciences – and a suspected one – AutoConnect Holdings. The large number of patents owned by patent monetizing entities is a potential patent dispute risk for the connected car industry.

    Connected Health

    An interesting point of the connected health patent landscape is the large number of patents owned by an active patent monetizing entity – Empire IP LLC – which is a potential patent dispute risk for connected health companies. An interesting point of the personal fitness/health care device patent landscape is the large number of patents owned by commercially unsuccessful companies and NPEs. Another is the small number of patents owned by commercially successful companies (e.g., Under Armour).

    IoT Platform Connectivity

    An interesting point of the IoT platform connectivity landscape is the large number of patents owned by many active patent monetizing entities – InterDigital, Optis Cellular Technology, Intellectual Ventures, Innovatio IP Ventures, Adaptix (Acacia), Evolved Wireless LLC, Wi-Fi One, and WiLAN. The large number of patents owned by many patent monetizing entities is a potential patent dispute risk.

    One can develop a strategic forecasting methodology to predict how the post-smartphone patent wars may emerge, and a strategic planning methodology to prepare for them.

    For detailed information, please visit http://www.slideshare.net/alexglee/postsmartphone-wearables-iot-devices-patent-wars-strategic-forecasting-methodology and http://www.slideshare.net/alexglee/postsmartphone-wearables-iot-devices-patent-wars-strategy-development.


    More articles from Alex…


    Synopsys on the Future of Custom Layout!
    by Daniel Nenni on 01-18-2016 at 7:00 am

    Analog and mixed-signal design has received more than its fair share of attention since the mobile revolution, and now that FinFETs are in production at the foundries I see that trend continuing. As a result, this year there are some interesting things brewing in EDA, especially in the area of Custom Layout.

    Innovation in Custom Layout has been nowhere near as rapid as in other areas of EDA. Over the years the custom tools have been tweaked and tuned with a new feature here and there to enable the layout engineers to be as productive as possible while the industry sailed along with the tide of shrinking process nodes. But the waters are getting choppy. Driven by the new challenges that the latest process nodes are bringing, especially FinFET and multi-patterning, it appears that it’s time for Custom Layout to undergo a much needed disruption.

    That’s clearly the thinking over at Synopsys where Graham Etchells believes it’s time for Custom Layout to get into the 21st century. Graham is an EDA veteran with over 38 years in ‘the business’ and his involvement with custom layout stretches back to the late 1970’s with the introduction of the Calma GDS 1 systems. He has some interesting insights as to what’s really needed to address this new wave of challenges and has taken to blogging to crowdsource new ideas and approaches.

    Graham’s new blog, “Custom Layout Insights,” starts with a three-part series called “We have come a long way,” which is a very good read. It has always been my conviction that you must know how you got to where you are today to better decide where to go tomorrow. He starts the series in the late 1970s with the first Calma GDS systems, which was before my time. I didn’t arrive in Silicon Valley until the early 1980s, when the Calma GDS-II system was the de facto IC design standard. In fact, that is where the term GDSII came from. That is also where the term tape-out came from, since we used to stream the GDSII out to magnetic tapes when the design was finished.

    Last week Graham started another series, “Hurricane FinFET – part 1,” which is a nice introduction to FinFET custom layout. Hopefully this is a 100-part series because there is a lot of ground to cover. And who better to cover it than the top EDA/IP company with leading-edge FinFET IP in production all over the world? And yes, all Synopsys IP designers use Synopsys tools, including their more than 400 layout people. I know this from experience with the Virage acquisition. All of our engineers were told that they had to switch from Cadence to Synopsys tools in 30 days without schedule delays. Imagine the horror! Not only did the Virage people do it, they love using their own tools and the custom development and support that come with them, absolutely.

    About Graham Etchells
    I started in EDA before it was termed EDA. It was simply Computer-Aided Design back in 1977. I was working at GEC Traction in Manchester England (yes, I am a Brit) doing control gear for locomotives. It was all heavy duty relays and contactors back in those days. Then came the electronics revolution and with it came the first CAD system. It was a CALMA GDS1 system with green vector refresh displays and huge digitizers for entering the data. It was amazing what you could do with a Data General Eclipse computer and 16K (yes, Kilobytes) of core memory! Pretty soon I was running the CAD system, which at the time was one of the largest in Europe, if not the world. CALMA was expanding and I was recruited as an applications engineer. That was it; I was in EDA and have been ever since. I have held marketing and sales positions at Silvar Lisco and Neolinear and I have been chasing the holy grail of analog/custom layout automation ever since I ran marketing for Virtuoso at Cadence back in 1995. Past experience tells me we may never find the Holy Grail, but there is light at the end of the tunnel. Follow this blog and see how we at Synopsys are progressing.

    Also Read: Synopsys Vision on Custom Automation with FinFET


    IBM’s OpenPOWER Presence Was Felt Heavily At SuperComputing ’15
    by Patrick Moorhead on 01-17-2016 at 10:00 pm

    IBM is in the process of reinventing themselves as a company, changing how they see themselves, what they do, and how they want their partners and customers to view them. This is exemplified best in their mobile alliance with Apple, their Watson cognitive efforts, the sale of their chip fab to GlobalFoundries, the sale of their x86 server and networking division to Lenovo, and the creation of the OpenPOWER Foundation.


    IBM’s Brad McCredie, also OpenPOWER President, kicks off analyst meeting at SC15 (Photo credit: Patrick Moorhead)

    In the past two years since OpenPOWER’s creation, IBM’s mindshare in big HPC (high performance computing) designs and partnerships across the industry has increased. IBM’s partnerships created through the OpenPOWER Foundation allowed the company to work with companies they never had a chance to in the past, while increasing the overall relevance of IBM’s POWER architecture on a broader, global scale. The evolution of the POWER architecture and IBM’s vision for the future have been heavily shaped by the partnerships that they have created in the OpenPOWER Foundation and now we are starting to see a nice glimpse of exactly what these partnerships have returned.

    The creation of the OpenPOWER Foundation was built on the thesis that the industry simply wasn’t providing enough performance with CPUs alone (including POWER and x86) to fill the compute need, and that more competition and “accelerators” were required to close the gap. The gaps that once existed in IBM’s HPC compute portfolio are starting to be filled with OpenPOWER ingredients. IBM is also enabling more complete HPC solutions for their customers to expand the scope of where HPC can be applied today and into the future. That includes cognitive computing, network data forensics, and facial recognition, among existing HPC applications like financial simulations, genomic analysis, oil and gas imaging, and scientific computing. As you would expect, the new challenges and opportunities in HPC have led IBM and their partners in the OpenPOWER Foundation to look to GPUs, FPGAs, and fixed-function controllers as accelerators for these new, more complex workloads.

    Last week at Supercomputing 2015 (SC15), the world’s premier supercomputing trade show, IBM’s major announcements were around the acceleration technologies that leverage the OpenPOWER partnerships and allow even more performance from POWER-based HPC platforms. These accelerators unsurprisingly come from some of the biggest and earliest partners in the OpenPOWER Foundation, namely NVIDIA, Xilinx, and Mellanox Technologies. I got the chance to meet with all three of these companies, along with analysts Gina Longoria and John Fruehe and technologist-in-residence Jimmy Pike. We had some very interesting conversations, to say the least.

    IBM highlighted their acceleration advancements with NVIDIA through NVIDIA’s GPUs and Tesla products, as well as the upcoming NVLink interconnect, which will be integrated into future POWER processors next year and beyond. However, IBM today has their own CAPI (Coherent Accelerator Processor Interface), which they use with all of their other accelerators in the OpenPOWER Foundation. This interface is great for many reasons, namely that it’s coherent and supports a very large ecosystem of players, far beyond what NVLink can do, and is really what enables OpenPOWER to work at its very core. NVLink is faster than CAPI but lacks CAPI’s coherency, which makes it less useful for broader applications and industry adoption.

    IBM also announced a strategic collaboration with Xilinx squarely focused on adding acceleration to datacenter applications through FPGA-enabled workload acceleration in conjunction with IBM POWER-based systems. As part of the collaboration, IBM will work with Xilinx to utilize and integrate CAPI to create solutions that improve overall performance in the data center, with a specific focus on OpenStack, Docker, and Spark software-driven data center architectures. This announcement also illustrates IBM’s belief that Moore’s Law is no longer meeting the needs of the HPC market and that accelerators and software are going to be needed to fill those gaps. Moore’s Law has moved its dates twice, most recently to two and a half years, but I’m not convinced yet that it can’t get back to every two years.

    While we’ve heard about the ‘end of Moore’s Law’ many times, GPUs and other ASICs, along with FPGAs, have consistently been shown to improve certain types of workload performance, and their programmability has only improved, making them better multi-purpose compute platforms. Nevertheless, it is great to see that IBM is pushing the envelope of performance with partners like NVIDIA and Xilinx to deliver the fastest possible solutions to their customers as well as broadening the relevance of the OpenPOWER Foundation. Most of all, I like CAPI’s open, fast, coherent nature.

    Hardware without software is useless and IBM went out of their way to show real HPC workloads and applications accelerated by CAPI. OpenPOWER’s HPC offerings continue to grow with solutions that include IBM Watson, Spark, Edico Genome, gpudb, neo4j and Radar. All of these solutions utilize OpenPOWER and IBM’s accelerated platform to deliver accelerated applications in a fast and efficient manner.


    OpenPOWER accelerated applications on display at SC15 (Photo credit: Patrick Moorhead)

    IBM has clearly shown that OpenPOWER is starting to flourish and is beginning to pose a pretty serious alternative to Intel’s high-end HPC offerings, including the Xeon and Xeon Phi family of processors. Intel has embraced accelerated computing with transcoding solutions, Intel Xeon Phi, and their $16.7B Altera acquisition, and they are keen on filling sockets with their own silicon; they will partner to fill the rest. IBM and their OpenPOWER partners have already won some very big HPC deals, and it wouldn’t be much of a surprise to see that momentum continue and bleed into smaller HPC deployments across different industries.

    More from Moor Insights and Strategy


    Maybe not the world, but schedules got eaten
    by Don Dingee on 01-17-2016 at 4:00 pm

    It has been almost five years since Marc Andreessen wrote the words, “Software is eating the world.” The premise of his essay in the Wall Street Journal in 2011 was pretty simple: the technology world has seen its intrinsic value shift from hardware to software. New all-software names have appeared on the list of high-flying companies, and hardware firms have been forced to transform into a more software-centric mix… Continue reading “Maybe not the world, but schedules got eaten”


    Intel to Focus on IoT and NOT Mobile?
    by Daniel Nenni on 01-17-2016 at 12:00 pm

    The Intel Q4 investor call was last week, and as Brian Krzanich approaches his 3rd year as Intel CEO, a new synergistic corporate strategy is emerging:

    That strategy is also resulting in the evolution of our business model to focus on three key areas of growth: The Data Center, the Internet of Things, and Memory.

    DCG, IoTG and Memory, delivered nearly 40% of Intel’s revenue and more than 60% of Intel’s operating margin in 2015. Additionally, these three adjacent markets delivered $2.2 billion in profitable revenue growth in 2015 alone. As we look ahead to 2016, we will continue to build on that strategy.

    You should notice that Mobile, Foundry, and the FPGA businesses are not included here.

    Certainly after spending BILLIONS of dollars on mobile Intel is not going to announce that they made a mistake and are leaving the business, but they did, they most certainly are, and it is a great move by Brian. If you look at the top three smartphone providers (Apple, Samsung, Huawei), all are now industry-leading semiconductor companies with SoCs as good as or better than what Intel was able to do with Atom. That leaves fabless giants Qualcomm and MediaTek and a handful of smaller players to fight for the remaining low-margin, low-growth merchant SoC business.

    Bottom line: Intel Mobile will die a slow and silent death inside the Intel Client Computing Group.

    Once Intel mobile is dead, can Intel Custom Foundry take another shot at the merchant SoC providers? Maybe, but the “Apple using Intel” rumor that is flying around again is not going to happen this year or next. The foundries are now building and ramping their processes for SoC design, and I do not see Intel ever doing that. Apple was the driving factor for TSMC 20nm, Samsung 14nm, TSMC 16FFC, and TSMC 10nm, and Qualcomm is behind Samsung 10nm.

    As much as I would like to have another leading edge foundry in the mix I do not see Intel continuing down this bumpy road. Remember, this will be the third time Intel has gone in/out of the foundry business and this time it was started before BK became CEO so he does not own it. As I have said before, Intel should acquire fabless companies that are in emerging markets to better diversify and do what Intel does best, make chips! And this is what BK has done with Altera and will continue to do in IoT, my opinion.

    Speaking of Altera, I have had many conversations with FPGA professionals, and the consensus is that Intel will focus on integrating Altera FPGAs into existing Intel business units rather than aggressively pursuing the remaining mainstream FPGA market segments. If so, some serious cuts will be coming to Altera, absolutely.

    One final note: Intel 10nm was also not mentioned in the prepared statement, which to me means there are more delays. In regard to process technologies (BEOL), TSMC 10nm can be considered a half node between Intel 14nm and 10nm. Based on conference papers, TSMC 7nm BEOL will have a slight process advantage over Intel 7nm.

    The TSMC 10nm PDK 1.0 is out now meaning that if all goes well TSMC 10nm wafers will start production in Q4 2016 and be ready for the iPhone 7s refresh in 2017. TSMC 7nm will follow one year later and hit the iPhone 8 in 2018 (TSMC 10nm and 7nm will use the same fabs/equipment so 7nm will ramp much faster than 10nm).

    According to what I heard in the hallway last week at the SEMI Industry Strategy Symposium, Intel 10nm will be in full production in early 2018 and 7nm 2-3 years after that. The 2-3 years will depend on EUV which, according to a recent ASML presentation, is closer to 3 than 2 years.

    Bottom line: At 10nm TSMC and Samsung will officially take the process lead from Intel and it will continue through 7nm, my opinion.

    More articles from Daniel Nenni