
How a Connected Watch Will Change the Connected Car Business

by Roger C. Lanctot on 05-18-2016 at 7:00 am

It’s amazing the amount of excitement being ginned up over connected cars. Analyst firms regularly publish estimates of hundreds of millions of connected cars on the road by 2020. It’s enough to make you believe it might happen.

If we even come close to those projections it will be a miracle, given the disconnect between the wireless industry and the auto industry. Nowhere is this dysfunctional relationship more apparent than in Europe – where I am this week at Telematics Berlin.

On the cusp of the implementation of Europe’s eCall (emergency call) mandate, car makers and wireless carriers are left to ponder what they have created. Leave it to the wireless carriers of Europe to give the automobile industry the gift of the dormant SIM – a wireless module that stays inert until and unless the user has a severe enough crash in their car.

Car companies like PSA, BMW, Renault, Volvo, Volkswagen and Mercedes that have already implemented proprietary eCall systems will find themselves adding a second dedicated and redundant eCall device to accommodate the European Union mandate, according to industry suppliers. This means that the device that was originally conceived to enable a multifunction platform encompassing safety, security and infotainment will be dedicated to a single function necessitating the embedding of multiple telecom modules.

It’s worth noting that in both Brazil and Europe a mandated SIM was “sold” to the industry as a platform for “value-added services” – i.e., stuff that car companies could “monetize.” In both instances the promise has been left unfulfilled. (Brazil’s mandate has been indefinitely delayed, while Europe’s appears to be on track for 2018 implementation.)

Should a car equipped with a dormant-SIM eCall system experience a crash, the module will spring to life and call for help – provided it is within range of a compatible network. Our fingers will be crossed on that point.

The latest European development sure to please smartphone makers along with Apple and Alphabet is the decision by Three, Tesco Mobile and Vodafone to offer inclusive data roaming for smartphone plans. The plans vary – ranging from 2GB up to 12GB of inclusive data roaming. The offer will be a boon to drivers who prefer to use their phones instead of the embedded wireless systems in their vehicles.
According to an Engadget report the deal has been “introduced ahead of new legislation, drawn up by the European Commission, which will scrap EU roaming charges altogether in 2017. A stop-gap measure was introduced last month, limiting the fees that network operators can enforce abroad.”

The problem here is that wireless carriers and regulators generally regard the telecom device built into a car as an M2M or, more importantly, a B2B system. The carrier is treating the car maker and the car as the customer so it is considered wholesale business – not subject to the same rules as retail smartphone roaming regulations.

This means that the removal of roaming limitations for consumer phones will not apply to devices built into cars. It also means that built-in telecom modules are subject to termination by carriers in some parts of the world who prefer to shut those telecom modules off for good if the consumer does not extend the contract beyond the free period.

As a result of these unfriendly policies it is reasonable to conclude that wireless carriers are not overly fond of connected cars – with the possible exception of AT&T and Vodafone (and Orange, Verizon and Telenor). By the same token, car makers are not overly fond of wireless carriers or vehicle connectivity generally. Wireless service for cars is complicated, full of security risks, and expensive. But the onset of the so-called eSIM may change everything.

As demonstrated at the Mobile World Congress with a Samsung Gear smartwatch, the day may soon arrive when consumers can provision the carrier of their choice in their car with the help of a smartphone app. They will also be able to transfer that privilege to a second owner of the car.

http://tinyurl.com/js8ob54 – A New SIM – For a New Generation of Connected Consumer Devices


Illustration of carrier provisioning process for Samsung watch with eSIM. – SOURCE: GSMA

The effect of emerging eSIM technology will be to allow the consumer to add their connected car to their existing wireless plan with all that that implies regarding the use of data and roaming. How far off on the horizon is this dream? Maybe not as far off as you think, judging by the presentations and live demo of the provisioning process portrayed in the (above) link.

The eSIM solution opens the door to a carrier-independent world of vehicle connectivity capable of resolving the conflicts and confusion that currently characterize carrier-car maker interactions. If eSIM technology sees rapid and wide adoption, the visions of hundreds of millions of connected cars may indeed be realized – and even sooner than expected.

But lost in all this excitement may be the importance of capturing vehicle diagnostic data and enabling firmware over-the-air updates. The value of the embedded SIM lies in developing a lifetime relationship with the car and the customer and preserving and enhancing the driving experience after the sale.
Until now, car companies have obsessed over the cost of wireless service with little help from the carriers. Multiple efforts are now underway at the Open Mobile Alliance and the International Telecommunication Union to find ways to build strong common bonds between carriers and car makers.

These parties need not and must not be in conflict. The challenges must be overcome as we have seen what will happen in the absence of cooperation – cars will fail and be recalled, carriers and car makers will be blamed, customers will be lost and profitability will suffer.

With the provisioning of a telecom module embedded in a Samsung Gear smartwatch the carriers and car makers were given a vision of what could be. It is time to embrace that vision.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/ac…e#.VuGdXfkrKUk


Bulking Up of Design Data Calls for Version Control on Steroids

by Tom Simon on 05-17-2016 at 4:00 pm

Even though design management systems are gaining popularity as a way to manage design data growth, they actually contribute to the problem of exploding data size. A linear increase in die dimensions causes quadratic growth in chip area, and shrinking feature sizes multiply the device count – and the design data – still further. Additionally, larger SOC projects with more components require larger teams, and larger teams mean more users who need copies of the design to do their jobs.

Conventional revision control systems give users copies of the design files in their workspace so that they can run design tools to modify the design or run verification tools. Traditionally, creating and managing workspaces is difficult and painful at best. At its worst it can become a true nightmare. All kinds of application-level techniques have been applied to try to make this process easier to run and manage.

Some approaches use de-duplication (dedup) to save space on servers, but it is necessary to create a full copy of the data before deduping can begin. Furthermore, after the data is copied to make a workspace, it then needs to be compared to other stored data on the fileserver to determine if there is duplicate data that can be consolidated. This amounts to a copy, two reads and a compare in order to reduce storage space. The bandwidth and compute penalty for this step is severe and can negate the advantages of the process, especially when dealing with terabytes of data.
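To see why, consider a toy sketch (hypothetical, not any vendor's actual code) of post-copy block dedup: the full copy must already exist, and every block must then be read back, hashed and compared before any space is reclaimed:

```python
import hashlib

def dedup_after_copy(workspace_blocks, store):
    """Naive post-copy dedup: the workspace copy already exists, so
    every block must be read back and hashed before any duplicate
    space can be reclaimed against the fileserver's stored data."""
    reads = 0
    reclaimed = 0
    for block in workspace_blocks:                 # one full re-read of the copy
        digest = hashlib.sha256(block).hexdigest()
        reads += 1
        if digest in store:                        # compare against stored data
            reclaimed += len(block)                # duplicate: free the copy
        else:
            store[digest] = block                  # unique: keep it
    return reads, reclaimed

store = {}
blocks = [b"rtl" * 100, b"gds" * 100, b"rtl" * 100]  # third block repeats the first
reads, reclaimed = dedup_after_copy(blocks, store)
```

Even in this toy version, the copy, the re-read, and the compare all happen before a single byte is saved – exactly the overhead that becomes severe at terabyte scale.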

File system links are also frequently used, but managing them can be troublesome. Using links in a directory to point to workspace copies of files is still an ‘application’-level solution to a ‘service’-level problem. Links can become jumbled and create a web of file pointers that can be hard to parse. At the time a file needs to be modified, the link has to be removed and the file data copied locally.

In reality, often only a small number of files in any given workspace need to be modified; the vast majority are there for reading only. So what is called for is a robust and ideally transparent system for efficiently creating workspaces and allowing users to read and/or work on the files that are needed specifically for their task. Most importantly of all, the file operations must follow the permissions dictated by the design management system.
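What such a transparent system might look like can be sketched as a toy copy-on-write workspace – a hypothetical illustration of the general idea, not Methodics' actual implementation:

```python
class CowWorkspace:
    """Toy copy-on-write workspace: reads are served from the shared
    managed store on the fileserver; a private copy is made only on
    the first write. (Illustrative sketch only.)"""
    def __init__(self, shared_store):
        self.shared = shared_store   # {path: bytes} held once on the fileserver
        self.local = {}              # private copies of modified files only

    def read(self, path):
        # a locally modified copy wins; otherwise read the shared data
        return self.local.get(path, self.shared[path])

    def write(self, path, data):
        self.local[path] = data      # only modified files consume workspace space

shared = {"top.v": b"module top;", "cell.gds": b"<layout>"}
ws = CowWorkspace(shared)
ws.write("top.v", b"module top; // edited")
```

Since most workspace files are read-only in practice, only the handful of edited files ever occupy per-user storage.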

So, if you were starting from scratch, what would be the best design for a design management infrastructure? It would be based on Perforce or Subversion, or another standard revision control system. However, it would be differentiated by putting a fundamental understanding of the native revision control system into the file system itself.

Methodics has done just this with their WarpStor appliance. Yes, building a filer interface hardware unit is very unconventional by today’s standards. Thinking it through, it is not unlike what NetApp did with NFS, making it a service that is furnished through an appliance. Granted data management is a different beast, but there are many parallels.

Methodics’ WarpStor uses managed design data on existing fileservers, but presents it to each user as local data that complies fully with the design management policies and procedures. Methodics likes to call this Version Control on Steroids, and it’s easy to see why. Network traffic and bandwidth consumption for design data drop sharply, and users see higher levels of responsiveness. Best of all, administrators and users get the full benefits of design management, including versioning, permissions, releases, etc.

To learn more about the internals and implementation of WarpStor, you can read the white paper “WarpStor – Version Control on Steroids” on their website, here.


The ASIC Business Model is Critical for the DIY and Maker Movements!

by Daniel Nenni on 05-17-2016 at 12:00 pm

If you look back at the beginning of the ASIC business you will see that it was really a critical time in the semiconductor industry. It all began in the 1980s, which coincidentally is when I started my career in Silicon Valley. General-purpose integrated circuits ruled the market, forcing system designers to cobble together off-the-shelf chips to build their products. The ultra-competitive nature of the semiconductor industry is really what drove the ASIC business model to where it is today (billions of chips). With everyone using off-the-shelf chips, it was much harder for companies to differentiate their products and make a profit, right?

Today it is déjà vu all over again with the DIY and Maker Movements cobbling together chips for a wide range of IoT devices that could go into high volume production in highly competitive markets.

Back then, getting an ASIC done was a highly negotiated process since you really did not know the size and complexity of your chip from the outset. I know of several examples where rough design specifications were literally delivered on scraps of paper or the legendary cocktail napkin. Today it is even more complicated with vast amounts of third party IP to choose from and many different foundry process alternatives to evaluate.

There is a nice chapter on the history of the ASIC business in our book “Fabless: The Transformation of the Semiconductor Industry” in case you are interested. If you are a registered SemiWiki member you can get a PDF copy of the book HERE. You can also get a paper or Kindle copy on Amazon.com.

Another interesting thing to note is that one of the driving forces of the ASIC business back in the 1980s was a stall in the growth of the semiconductor industry and the ensuing layoffs, much like the one we are experiencing today, which brings us back to IoT and the DIY and Maker Movements.

To get the DIY and Makers started on their ASIC adventure, Open-Silicon has a very nice landing page for IoT ASICs with white papers (Slash Time-To-Market and Risks: IOT SoC Platforms, IoT SoC Platform Demonstration_Cortex-M Series, Industrial IoT System Demonstration) and a replay of a joint ARM and Open-Silicon webinar: “Can a custom ASIC revolutionize your next IoT product?”.


To make things even easier, Open-Silicon now has a web portal to dramatically reduce turnaround time for ASIC quotations. I tried it myself and found it to be intuitive and quite easy to use (and you don’t have to talk to a sales person). Just submit your system requirements, including your choices for IP, packaging, manufacturing process, system voltage, and power constraints. And did I mention you don’t have to talk to a salesperson?

And if you are going to #53DAC this year here is what Open-Silicon has planned for you:

Booth Demonstrations:
IoT ASIC Platform – Demonstrates end-to-end communication between sensor hubs and a cloud platform through a gateway device. Depending upon the type of radio technology, the sensor hubs can be used outdoors, on the factory floor or inside a room. This Industrial IoT system setup is part of Open-Silicon’s Spec2Chip IoT Platform, which allows IoT ASIC designs to be evaluated at the system level.

28G SerDes Evaluation Platform – Enables the rapid deployment of chips and systems for high-bandwidth networks. The platform includes a full board with a packaged 28nm test chip, software and characterization data. The chip integrates a 28Gbps SerDes quad macro using physical layer (PHY) IP, and meets the compliance needs of the CEI-28G-VSR, CEI-25-LR and CEI-28G-SR specifications.

HMC 2.0 Memory Controller ASIC IP Platform – Allows quick evaluation of HMC technology and performance testing of the HMC links. Based on the Xilinx Virtex-7 FPGA, this platform includes a fully validated design that integrates HMC controller and exerciser functions.

2.5D SoC Solution Platform – Demonstrates a functional system-on-chip (SoC) solution featuring two 28nm logic chips, each embedding a dual-core 1GHz ARM Cortex™-A9 processor, connected across a 2.5D silicon interposer.

Paper Presentation:
Breaking Through “The Memory Wall” – HBM IP Subsystem
Tuesday June 7, 3:30pm – 5:00pm, Ballroom G (IP Track: Evolving IP Interconnects & Verification)

Poster Presentations:

Physical Planning of IO Interface for 3D Stacking of Packaged Devices
Monday, June 6, 5:00pm – 6:00pm, Exhibit Floor (Design/IP Track Poster Session)

Die Sizing Bound by Peripheral Bumps and IPs
Tuesday, June 7, 5:00pm – 6:00pm, Exhibit Floor (Design/IP Track Poster Session)

I hope to see you there!


Army of Engineers on Site Only Masks Weakness

by Jean-Marie Brunet on 05-17-2016 at 7:00 am

Hardware emulation was conceived in the 1980s to address a design verification crisis looming on the horizon. In those days, the largest digital designs were stressing the limits of the software-based, gate-level simulator that was the mainstream tool for the task.

It was anticipated, and confirmed in short order, that adopting hardware in the form of field-reprogrammable devices to perform functional design verification would subdue and control what was becoming an intractable problem – not only for the largest designs of the time, but also providing a path to keep pace with design-size growth into the future.

Another major benefit inherent to the adoption of hardware to verify a design-under-test (DUT) was its ability to test the DUT with live/real traffic, albeit with a caveat. The fastest speed of early emulators was in the ballpark of 5MHz, not sufficient to keep up with real traffic clocking at 100MHz. The problem was solved by inserting speed adapters – conceptually, first-in-first-out (FIFO) buffers – between the physical world and the I/O of the emulator.
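Conceptually, such a speed adapter behaves like this toy simulation (illustrative only, with made-up burst and drain rates): traffic arrives in fast bursts, the emulator drains the FIFO at its slower clock, and the FIFO depth must absorb the rate mismatch:

```python
from collections import deque

def run_speed_adapter(packets, burst_len, drain_per_burst):
    """Toy FIFO speed adapter: the fast external side fills the FIFO
    in bursts, the slow emulator side drains a few packets per burst.
    Returns what the emulator consumed and the peak FIFO depth the
    adapter had to provide. (Conceptual sketch only.)"""
    fifo = deque()
    max_depth = 0
    consumed = []
    for i in range(0, len(packets), burst_len):
        fifo.extend(packets[i:i + burst_len])        # fast side fills
        max_depth = max(max_depth, len(fifo))
        for _ in range(min(drain_per_burst, len(fifo))):
            consumed.append(fifo.popleft())          # slow side drains
    while fifo:                                      # drain the backlog
        consumed.append(fifo.popleft())
    return consumed, max_depth

packets = list(range(40))
consumed, depth = run_speed_adapter(packets, burst_len=20, drain_per_burst=1)
```

Ordering is preserved, but the slower the emulator clock relative to the traffic, the deeper the FIFO must be – which is exactly the caveat noted above.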

The two advantages, though, came with a steep price. Not a purchase price, since it was well known that accelerating time to market had a profound, positive impact on profits that would offset the expensive acquisition of an emulator. The real price was the rather time-consuming, cumbersome, and frustrating task of mapping the DUT onto the FPGAs.

The problem arose from the FPGA’s limited number of I/O pins relative to its logic capacity – a constraint captured by Rent’s Rule – which complicated the mapping of the DUT onto the programmable devices. To cope with the severe limitation, several interconnection schemes were devised over time: nearest neighbor, full and partial crossbars, and synchronous and asynchronous time-multiplexing of pins. None eliminated the problem.

By the mid- to late-1990s, two leading suppliers ditched commercial FPGAs, and replaced them with custom devices implementing custom emulation architectures. These were thought to alleviate and ultimately eliminate the bottlenecks. And they did.

After a decade of successful adoption of custom-based emulators, the rising interest in FPGA prototyping platforms proposed by some vendors not only for early software validation, but as an alternative to custom-based emulators seemed to change the landscape.

This is not the case. The problem remains, and is now worse.

An FPGA prototype trades off features and capabilities in favor of attractive cost advantages and fast execution speed. Both are requirements for software validation by a large team of software developers, where each developer may be assigned a copy of the prototype. The long setup time, however, is still a serious problem. With today’s SoC complexity reaching into the hundreds of millions of gates, if not billions, setup may extend to several months – never a week or less.

What would a supplier of FPGA emulators then do?


Compensate for the weakness by committing an army of engineers, partly R&D personnel and partly application engineers. They provide on-site support, work side by side with lead design engineers, and ensure that the customer’s designs are ready for emulation, often after a few calendar months. This significant involvement is mandatory not only during an evaluation before purchasing the emulator, but also during initial adoption, and it may extend – with even greater support bandwidth – into production use.

It may seem that as long as the commitment is shouldered by the emulator vendor, the customer may enjoy the benefits without penalties. Again, this is the wrong perception.

Being so dependent on the supplier is worrisome for three reasons:

First, requiring involvement of lead design engineers, scarce resources in any IC design organization, for the design bring-up in the emulator is a proposition few can afford.

Second, the sheer volume of engineers required for deploying an FPGA emulator, if available for hire, questions the cost advantage.

Third, a company that must rely on an emulator vendor’s army of engineers for a mission-critical task gives the vendor excessive leverage, since that support could be reduced at any time.

Instead, the company needs to rely on its own engineers to run the emulator effectively. That means setting up and training an internal support organization. FPGA-based emulators, however, would add a significant financial burden to such a proposition.

In fact, long gone are the days when mapping a DUT onto the FPGAs in the emulator was slow, unwieldy, and aggravating. Today’s custom-based emulators are scalable, efficient, and can be deployed with minimal resources, minimal design knowledge and limited involvement from the supplier. Choosing between the two seems like a straightforward decision.


SRAM Optimization for 14nm and 28nm FDSOI

by Daniel Payne on 05-16-2016 at 4:00 pm

I did SRAM and DRAM design as a circuit designer from 1978 to 1986, but in 2016 there are so many more challenges to using 28nm and 14nm FDSOI technology. One way to keep abreast of SRAM design is to read conference papers, so I just finished a paper from authors at STMicroelectronics and MunEDA presented at IEEE IRPS 2016 (International Reliability Physics Symposium), held April 17-21 in Pasadena, CA. The paper is titled, BTI Induced Dispersion: Challenges and Opportunities for SRAM Bit Cell Optimization.

Let’s start with a 6 transistor SRAM cell schematic showing a Word Line (WL), Bit Lines (BL), PMOS pull-up transistors (PU), NMOS pull-down transistors (PD) and NMOS transfer gate transistors (PG):

BTI
One reliability challenge is that the threshold voltage (Vth) of both PMOS and NMOS transistors changes over time as the devices are switched on and off. For PMOS transistors the effect is called Negative-Bias Temperature Instability (NBTI); for NMOS transistors it’s called Positive-Bias Temperature Instability (PBTI). With NBTI and PBTI, Vth increases over time and mobility decreases, causing the transistors to become slower.

Memory Challenges
Because an SRAM bit cell is very sensitive to device matching we really need to know how the Vth distribution changes over time. The minimum VDD supply level required to ensure proper write and read cycles is called Vddmin, and it’s another critical metric for users of an SRAM instance.

Test Structures
Engineers at STMicroelectronics designed a test structure in 14nm FDSOI to characterize SRAM bit cells, then measured the difference between fresh and aged MOS parameters for both NMOS and PMOS transistors:


Correlation plots between fresh and aged parameters

For PMOS transistors the spread is larger after aging, while for NMOS transistors the parameters are merely shifted. The next chart shows on the left how the Vth values shift for different stress times, and on the right that the Vth shift is uncorrelated with the initial Vth.

The Vth distribution is still normal as shown below with devices i and j starting out at plus 3 sigma Vth and minus 3 sigma Vth, respectively. The worst Vth after a BTI stress is directly related to its initial Vth.

A BTI model can be developed to show the Vth shift and its final distribution.
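A minimal Monte Carlo sketch (with illustrative numbers, not ST's measured values) captures the behavior described above: a normally distributed fresh Vth, plus an aging shift and extra dispersion uncorrelated with the fresh value:

```python
import random
import statistics

random.seed(0)

def aged_vth(n, vth0=0.45, sigma0=0.03, shift=0.04, sigma_bti=0.015):
    """Toy BTI model: aged Vth = fresh Vth + mean shift + extra
    dispersion that is uncorrelated with the fresh value.
    (Illustrative parameters only, not ST's measured data.)"""
    fresh = [random.gauss(vth0, sigma0) for _ in range(n)]
    aged = [v + shift + random.gauss(0.0, sigma_bti) for v in fresh]
    return fresh, aged

fresh, aged = aged_vth(20000)
# aging shifts the mean and widens the distribution
mean_shift = statistics.mean(aged) - statistics.mean(fresh)
spread_ratio = statistics.stdev(aged) / statistics.stdev(fresh)
```

Because the added dispersion is independent of the starting point, the aged distribution stays normal but wider, so a device that starts at +3 sigma remains among the worst after stress.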

Using this BTI model and then running Monte Carlo analysis on a 6-transistor SRAM bit cell to measure the Static Noise Margin (SNM) during write and read modes produces the next chart, along with a Worst-Case Analysis (WCA) using the WiCkeD software from MunEDA:

Sensitivity analysis of local Vth parameters at a 5 sigma target for SNM reveals which device parameters are the major contributors. The contribution of ageing-induced dispersion is moderate compared to that of fresh Vth variation, but is still significant.

  • Fast pull-down R
  • Slow pull-up R
  • Fast pass-gate R

SRAM Bit Cell Sizing
For optimal bit cell design we need to optimize SNM, Write Margin (WM), Iread (drive current during read) and Isb (leakage current). Fortunately for us, there is an automated Yield Optimizer (YOP) that can attack this problem instead of relying on manual, iterative efforts. Here’s a screenshot of the yield optimization in the WiCkeD tool for Area, Iread, Isb, SNM and WM:


Note how both WM and SNM grow in robustness during optimization, whereas their nominal values grow only slightly or even shrink. Such mixed effects between spec robustness, nominal value, and device geometries are common, and are the reason why a yield optimizer has to run high-sigma analysis repeatedly for all specs. Short runtime and high accuracy of the worst-case analysis are key for SRAM yield optimization.

You can even ask the optimizer to tune transistor sizes for low leakage, low power or high performance.

Summary
Because of physical effects like BTI, it is tricky to get optimal results from manual SRAM design. STMicroelectronics has worked with MunEDA to create a methodology that automates transistor sizing of their SRAM bit cell while taking BTI effects on Vth into account. This optimization helps create higher-yielding SRAM IP for use in SoCs built with FDSOI.

Related Blogs


According to ST, SiC Power Devices will Accelerate Automotive Electrification

by Eric Esteve on 05-16-2016 at 10:39 am

Silicon Carbide (SiC) is a very interesting material. In nature it occurs as the mineral moissanite, found only in minute quantities in certain types of meteorite. Moissanite’s physical properties are very similar to those of diamond in terms of density and abrasive power. In the semiconductor industry, SiC is characterized by a wide bandgap, high breakdown voltage and high carrier drift velocity at large electric fields (saturation velocity).

These properties lead to fast response times in SiC devices and the ability of SiC MOSFETs to support high-power applications better than the equivalent silicon device, the Insulated Gate Bipolar Transistor (IGBT). According to ST, we are talking about SiC diodes and transistors capable of operating well above the 400V range of today’s electric and hybrid drivetrains!


In Electric Vehicle (EV) and hybrids, where better electrical efficiency means greater mileage, ST’s latest silicon-carbide (SiC) technology enables auto makers to create vehicles that travel further, recharge faster, and fit better into owners’ lives. ST is among the first to present new-generation rectifiers and MOSFETs for all the vehicle’s high-voltage power modules, including the traction inverter, on-board battery charger, and auxiliary DC-DC converter.

The smaller SiC diode and transistor structures present lower internal resistance and respond more quickly than standard silicon devices, which minimizes energy losses and allows designers to use higher switching frequencies for more compact designs. With SiC MOSFETs, power losses in the inverter can be reduced by up to 80% at light or medium loads compared with Si IGBTs. Real-world adoption of EVs and hybrids will certainly benefit from innovations like SiC devices. Because a SiC-based solution offers highly robust intrinsic body diodes – eliminating the freewheeling diodes necessary with IGBTs – and a smaller, lighter power unit with lower cooling requirements, the overall solution is smaller and cheaper.

ST is committed to supporting major carmakers and Tier-1s with silicon carbide technology for high-power device requirements, and the company’s SiC devices have demonstrated superior performance and reached an advanced stage of qualification. Customers are preparing to launch new products as soon as 2017. ST has developed the industry’s most advanced processes to fabricate SiC MOSFETs and diodes on 4-inch wafers. To drive down manufacturing costs, improve quality, and deliver the large volumes demanded by the auto industry, ST is scaling up its production of SiC MOSFETs and diodes to 6-inch wafers, and is on schedule to complete both conversions by the end of 2016.

The automotive industry has the most stringent quality requirements (together with aeronautics), and ST has completed the qualification process for 650V SiC diodes to AEC-Q101. The company expects to announce the qualification of the latest 650V SiC MOSFETs and 1200V SiC diodes in early 2017, while the 1200V SiC MOSFETs will be AEC-Q101 qualified by the end of 2017.

Worldwide production of vehicles (cars and commercial) was about 90 million in 2015. If EV and hybrid adoption keeps growing as it did last year, at 60% year over year, the prediction that 35% of vehicles produced by 2040 will be EVs or hybrids sounds realistic. We can easily evaluate the impact of such a production level on our day-to-day lives, especially for those living in a large city surrounded by car pollution. 35% of 100 million vehicles (at that time) makes 35 million. If we assume an average annual mileage of 12,000 km (*) and an average fuel consumption of 8 liters per 100 km, we come to roughly 35 million x 1,000 liters, or some 35 billion liters of fuel which will NOT be consumed! At least not in a car engine, generating direct pollution…

* To US readers: sorry to use SI units (the International System of Units)…
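The back-of-the-envelope estimate above can be checked in a few lines, using the article's own figures:

```python
vehicles = 100e6 * 0.35            # 35% of ~100M vehicles produced per year
km_per_year = 12_000               # assumed average annual mileage
l_per_100km = 8                    # assumed average fuel consumption

liters_per_vehicle = km_per_year / 100 * l_per_100km   # 960 L per vehicle-year
total_liters = vehicles * liters_per_vehicle           # fuel not consumed
```

The exact per-vehicle figure is 960 liters per year, and the exact product is about 33.6 billion liters, which the article rounds to 1,000 liters and 35 billion respectively.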

That’s why technologies like silicon carbide are so important: they become enablers of changes in human behavior, thanks to wide-bandgap SiC devices offering lower energy losses, supporting higher voltages and operating faster than silicon-based IGBTs.

From Eric Esteve from IPNEST


DAC 2016 – Register Now

by Bernard Murphy on 05-16-2016 at 7:00 am

DAC is again going to be in Austin (reason enough to go), from June 6th-8th for the main event. A number of events caught my eye:

  • Monday AM – custom hardware for algorithmic trading. If you want to know more about FinTech (technology for finance) this could be for you
  • Another Monday morning session on Linux porting, bring-up and driver development including use of virtual platforms in this task
  • Tuesday AM – a session on biologically inspired electronic design (a favorite topic of mine)
  • Wednesday AM – verifying security aspects of hardware
  • Wednesday PM – how to verify (cut?) the Gordian knot of system complexity
  • Wednesday PM – architecture and design for Automotive ECUs

There are also some great keynotes and SkyTalks (again a personal selection):

  • Monday keynote: What it takes to enable securely connected, self-driving cars (Lars Reger, NXP)
  • Monday SkyTalk – Wireless implantable microsystems (in the brain), Ricki Muller (UCB and Cortera)
  • Wednesday keynote: The challenge to develop truly great products (Mark Papermaster, AMD)
  • Wednesday Skytalk: Security at different layers of abstraction (Brian Payne, Netflix)
  • Thursday keynote: Learning and reasoning for autonomous robots (Peter Stone, UT Austin)
  • Thursday SkyTalk: Biological electronics – merging life’s transistors with silicon (Kenneth Shepherd, Columbia U, NY)

More articles by Bernard…


Apple Smart Home Artificial Intelligence Insights from Patents

by Alex G. Lee on 05-15-2016 at 8:00 pm

US 20160132030 illustrates a smart home system for automating operation of smart home devices (e.g., thermostats, lighting devices, household appliances, etc.) based on aggregation of individual user routines.

User mobile devices and smart home devices can incorporate pattern detection logic to identify patterns in the user’s behavior (e.g., going to particular places at particular times or invoking particular operation functions of a smart home device at particular times). A coordinator (e.g., user smartphone) can receive information about detected patterns and analyze the information to detect an aggregate pattern.

Based on the detected aggregate pattern, the coordinator can identify the operational behavior to automate (e.g., turn off the lights when the last user goes to bed) and implement the automated behavior by establishing the automation rule that reflects the detected aggregate pattern.

The automation rule can specify an action to be taken by the smart home device. For example, the automation rule can specify that a porch light is to be turned on if an outside ambient light sensor detects a light level below a threshold or at a specific time each night.
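Such a rule amounts to a condition-action pair. The sketch below is hypothetical – the rule names, sensor fields and threshold are illustrative, not taken from the patent text beyond the porch-light example:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AutomationRule:
    """A condition -> action pair, as in the patent's porch-light
    example. (Hypothetical sketch, not Apple's implementation.)"""
    condition: Callable[[dict], bool]
    action: str

def evaluate(rules, sensor_state):
    """Return the actions whose conditions hold for the current state."""
    return [r.action for r in rules if r.condition(sensor_state)]

rules = [
    # porch light on when ambient light drops below a threshold
    AutomationRule(lambda s: s["ambient_lux"] < 10, "porch_light_on"),
    # all lights off when the last user goes to bed
    AutomationRule(lambda s: s["last_user_in_bed"], "all_lights_off"),
]
actions = evaluate(rules, {"ambient_lux": 4, "last_user_in_bed": False})
```

In the patented system, the coordinator would derive such rules automatically from detected aggregate patterns rather than have a user write them.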

As another example, the automation rule can specify that a heating (or cooling) system is to be turned on to adjust the temperature of the house to a target temperature. The coordinator can analyze the pattern data of the user routines to detect aggregate patterns across the users. Any pattern of behavior of an individual can be inferred by the machine learning algorithm based on inputs indicative of the individual’s location and/or activity at various times.

US20140136451 illustrates an artificial intelligence (AI) system that determines the preferential smart home device action associated with a specific user. The AI system associates observed user behavior with the output of a machine learning process derived from attributes observed at the specific smart home device, attributes aggregated from a number of other smart home devices, and prior knowledge. The AI system then determines a preferential smart home device action based on the results of that association.

US9303997 illustrates an AI system that predicts future user behavior using a machine learning process based on user-specific data.

Reference: IoT Smart Home Patents Data 2Q 2016


Future Connected Cars 2016

Future Connected Cars 2016
by Roger C. Lanctot on 05-15-2016 at 4:00 pm

There are times when a speaker for a large corporation clearly breaks from the cold corporate talking points and suggests something that you just know his managers will consider cringe-worthy. Haden Kirkpatrick, described on the event agenda of the Future Connected Car event in Santa Clara as “director of strategy and marketing” for Esurance, had one of those moments yesterday.

Haden said that what might emerge from the confluence of usage-based insurance and automated driving is a scenario where drivers will only pay for insurance when their hands are on the steering wheel. This simple yet mind-blowing comment almost slid by without notice – but I noticed and pressed him a bit and he confirmed that that was indeed what he was suggesting.

Upon further review I came to discover that Haden had forgotten his business cards. (I obtained his email address and found him on LinkedIn.) On LinkedIn Haden describes himself as an “innovator, futurist, technologist, strategist, mobile guru, entrepreneur, martial artist, guru.” Somehow I suspect he is also a sensei. There were no details regarding his role at Esurance, which left the impression that he may be an internal consultant – but let’s assume he has the corporate position described on the event agenda.

The concept he described is several leaps ahead of anything currently proposed or available in a commercial solution. I mentioned what he had described to another industry colleague attending the event who had not seen the presentation and he claimed that the proposition was precisely what Hakan Samuelsson, Volvo Cars president and chief executive officer, had proposed last year – that Volvo would assume full responsibility for the safety of drivers of its self-driving cars when that technology becomes available.

Samuelsson may have made that commitment – one which has so far gone unmatched by any other car maker – but he never suggested the customer will only pay for insurance when their hands are on the steering wheel. This powerful concept introduces the prospect of an incentive for the driver to take his hands off the wheel at every opportunity and keep them off the wheel. This would represent a huge boost to the trust necessary to convince drivers to let go.

I can’t help but think that the executives in the audience from Allstate, Esurance’s parent company, were shaking or burying their heads in their hands at Haden’s words while the State Farm attendees chuckled to themselves. No insurer will be racing to deliver such a value proposition – though car makers may.

But Haden highlighted a critical issue and question for insurance companies underwriting cars with autopilot capabilities, such as Tesla’s Model S. When such cars are in autopilot the software has become the driver of the car and, therefore, the “driver” should no longer have to pay insurance at those times. It makes perfect sense, but the technology in the market has not caught up to this value proposition.

It also raises the question as to whether insurance companies even want to insure cars with self-driving technology. Currently, as Haden noted, all underwriting is based on historical data. Cars with autopilot have no history, so there is no basis for underwriting the risk – and thus no sound basis for offering coverage.

One could argue, as Tesla Motors CEO Elon Musk has, that cars with autopilot are safer than those without. Judging from multiple YouTube videos (drivers in the back seat!), insurers might conclude otherwise. But no owner of a car with self-driving technology has yet reported an increase in insurance rates.

So the question is – do insurers like Esurance have an interest in or an obligation to foster the adoption and use of autopilot? Should customers be rewarded for surrendering driving to the car computer? It’s starting to look that way.

Thus far, insurance companies have been more or less silent on the issue, and insurance discounts for advanced safety features remain difficult to come by. Maybe it’s time for the industry to listen to an Esurance futurist and guru, and think more deeply about how self-driving technology is altering the insurance landscape. Insurers may well hold the key to the broader adoption of these systems and the reduction of the 100-fatalities-a-day carnage on U.S. highways. Yes, I know, it’s time to turn in the direction of the skid.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


When will Internet of Things really arrive?

When will Internet of Things really arrive?
by Sudeep Kanjilal on 05-15-2016 at 12:00 pm

A lot of ink has been spilled regarding the impending tsunami of the Internet of Things (IoT). It is certainly an interesting topic, and not just for geeks. As the next-generation computing platform, IoT will see an explosion of connected devices (wearables, intelligent refrigerators, smart thermostats, etc.), currently estimated to be at least 10X the ecosystem of smartphones (this tech generation’s ecosystem). With the price of a basic sensor-equipped computer already down to the sub-$5 level, it will reach the sub-$1 level in a couple of years and then – boom, ubiquity.

However, let’s first clear up some basic misconceptions regarding IoT before we get to answering the core question. IoT is not really a bunch of ‘things’ – it’s a software-based ecosystem (think ‘stack’) that enables people to ‘plug into’ a universe of connected active devices that, for the most part, communicate with each other, while occasionally providing data/inputs to humans and taking instructions/decisions.

The key to understanding when IoT will become real is therefore an exercise in predicting when the software stack will be ready. And that raises the obvious question – how do we identify the components of this ‘stack’?

The ‘Stack’
Let’s take a simplified view – just 5 ‘layers’. Start with UI, then Applications, then Messaging/Communication, then Data and finally Infrastructure. Then let’s look at the trends/trajectories in each of these 5 layers toward that ‘software-based ecosystem of things’. Once we see where we are going with these components, we will better understand the progress of the entire ecosystem.

The Consumer UI is obviously the most exciting layer, and perhaps the most critical one. Each computing revolution fundamentally changes the ‘UI’ – think mainframe to mini to PC to smartphone (people old enough will remember leaving shoes outside an air-conditioned room to work on mainframes). IoT will need a leap in interface capability and mechanism beyond smartphone-based UI design, and leading firms are currently experimenting with multiple approaches – augmented reality, VR, voice, hand gestures, etc. While it is difficult to accurately predict which of these will win in the next 5 years, or whether something entirely new will emerge from a start-up, what is clear is that the computing requirements, coupled with the size requirements of the user interface, make it a difficult challenge. This layer will probably be the last shoe to drop and complete the picture.

The Consumer (and Enterprise) Application layer is probably the easiest, as current technology is sufficient – but it will be heavily influenced by the business use cases, which, in turn, will flow from the user interface capability. Think Uber and Tinder to understand how applications and business use cases are driven by the UI capabilities of the system. And it is this layer that will drive the future product-cycle evolution, where most start-ups will be born over the next decade.

The Messaging and Communication layer is also ‘there already’, for the most part. Of course, it is most advanced in retail commerce, and a lot needs to be done for other ‘verticals’ like healthcare, automotive, etc. – but as business use cases develop in the layer above, this layer will keep pace in terms of new standards. Payments is an important part of this layer, which, while very well developed, will still need new and innovative messaging, processing and business models to fit the new use cases being developed – just as payments had to evolve with the previous PC-based model (think eCommerce). However, this is not a stumbling block – the basic capabilities exist.

The Data and Infrastructure layers (think cloud/IaaS) are, as always, the most critical for this entire ecosystem to work smoothly and deliver the business use cases that will drive the economics. Here, AI, deep learning, machine learning and Big Data have made tremendous progress over the past 5 years, so these layers are ready for prime time now.
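The layer-by-layer readiness argument above can be condensed into data; the maturity labels here are this summary’s paraphrase of the text, not an industry taxonomy:

```python
# The five-layer IoT 'stack', top to bottom, with the readiness
# assessment argued in the text (labels are paraphrased).
IOT_STACK = [
    ("UI",              "open problem: AR/VR/voice/gesture still competing"),
    ("Applications",    "technology sufficient; waiting on UI-driven use cases"),
    ("Messaging/Comms", "mostly there; verticals like healthcare lagging"),
    ("Data",            "ready: AI/ML/Big Data progress over the past 5 years"),
    ("Infrastructure",  "ready: cloud/IaaS in place"),
]

# The one layer still marked as an open problem is the 'last shoe to drop'.
last_shoe_to_drop = next(name for name, status in IOT_STACK
                         if status.startswith("open problem"))
```

Walking the stack this way makes the article’s conclusion mechanical: four of the five layers are largely in place, leaving the UI layer as the gating item.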

What then, next?
So we come back to the last shoe to drop – the UI layer. With the rest of the stack essentially ready, we are probably looking at a lot of innovation over the next 3-5 years. Rapidly declining prices for micro/embeddable computers, further improvements in the Data and Cloud layers, and better stack integration will tip the system over to the next generation very fast.

We are, in some sense, where mobile computing was in 2005 – a lot of interesting action, a lot of point innovations, a lot of standards development, 3G/4G roll-out for infrastructure, an impending sense that ‘something is about to happen’ – and then, boom, the iPhone solved the Consumer UI challenge.