Recently, I mentioned smartphone SoCs consume one, maybe two orders of magnitude too much power for broader use in wearables. However, that is only when they are “on”. To save power and stretch battery life, smartphones spend a lot of time napping – display off, sitting still with MEMS sensors powered down, waiting for an incoming phone call or text.
The wearable use case – beyond the head-shrunken smartwatch, there are actually several use cases – can be quite different. Most are built around one idea: when worn, the device is “always-on”, which poses a conundrum for designers: deliver functionality while using less power.
By definition, an activity monitor needs its MEMS sensors always-on and capturing data, even if the wearer isn’t moving. Fitness is not all about elevated vital signs during exercise – many of these devices profile sleeping habits, sitting posture, and more for a complete picture of activity and the quality of rest or sleep.
Sensing is in fact the primary function of a wearable – not just raw variable readings, but readings transformed into context via system-level algorithms. For example, a MEMS inertial measurement unit with accelerometer, gyroscope, magnetometer, and pressure sensor can feed FootSLAM, an algorithm that maps the interior of buildings by following pedestrian footsteps (a simplified sketch of the dead-reckoning front end appears below).
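FootSLAM itself is probabilistic (hence the particle filters mentioned further down), but its front end is essentially pedestrian dead reckoning. Here is a minimal Python sketch of that idea – step detection from accelerometer magnitude plus a heading per step – with made-up thresholds and a fixed stride length, not CEVA’s or FootSLAM’s actual implementation:

```python
import numpy as np

def detect_steps(accel, fs=50.0, threshold=1.5, min_gap=0.3):
    """Naive step detector: local peaks of acceleration magnitude (in g)
    above a threshold, separated by a refractory gap in seconds."""
    mag = np.linalg.norm(accel, axis=1)   # accel shape (N, 3) -> magnitude (N,)
    gap = int(min_gap * fs)
    steps, last = [], -gap
    for i in range(1, len(mag) - 1):
        if mag[i] > threshold and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]:
            if i - last >= gap:
                steps.append(i)
                last = i
    return steps

def dead_reckon(steps, heading, stride=0.7):
    """Accumulate a 2-D track: one fixed-length stride per detected step,
    along the heading (radians, e.g. from the magnetometer) at that sample."""
    pos, track = np.zeros(2), [np.zeros(2)]
    for i in steps:
        pos = pos + stride * np.array([np.cos(heading[i]), np.sin(heading[i])])
        track.append(pos)
    return np.array(track)
```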
Wearable displays range from a compact LCD – nowhere near the size of today’s typically bigger-than-pocket smartphone display – to a minimal LCD dot matrix, to basically none beyond a status indicator LED. The result is obviously greatly reduced display power, but also, in many cases, relief from the need for a mobile GPU running OpenGL ES.
User interfaces for wearables also vary, from simple pushbuttons or small touchscreens to voice-triggering or even facial recognition. Many industry observers have said the next frontier is ear-mounted devices with a voice-triggered interface, with the usual microphone and speaker plus a powerful set of sensors and software for activity monitoring, access control, and more.
Then, there is wireless connectivity – typically a must-have, but a bit of a problem. As one reader suggested, as soon as a PHY turns on with any kind of transmit power, the power consumption equation shifts radically. Choosing a low-overhead protocol such as Bluetooth Low Energy, processing the stack efficiently, and minimizing radio transmit time are all part of the strategy (see the batching sketch below).
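As an illustration of the “minimize radio transmit time” point, a common tactic is to buffer sensor samples locally and wake the radio only for occasional bursts, since the PHY dominates the power budget while active. A hedged sketch – the BatchedUplink class and radio_send callback are hypothetical names, not a real BLE stack API:

```python
class BatchedUplink:
    """Buffer samples locally; power the radio only when a batch is full,
    amortizing each expensive radio wake-up over many samples."""

    def __init__(self, radio_send, batch_size=32):
        self.radio_send = radio_send  # callback that turns the PHY on and transmits
        self.batch_size = batch_size
        self.buffer = []

    def push(self, sample):
        self.buffer.append(sample)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.radio_send(self.buffer)  # one burst instead of 32 tiny packets
            self.buffer = []
```

The same reasoning is baked into Bluetooth LE’s connection intervals: the radio sleeps between brief exchanges rather than staying on continuously.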
Moving from a general-purpose smartphone SoC, capable of running a high-performance operating system and thousands of applications, to a concisely pre-defined set of wearable functions presents a big opportunity for designers to achieve always-on operation. As CEVA’s Moshe Sheier puts it, an ultra-low-power hybrid RISC-DSP core may be able to handle the entire wearable function set.
CEVA is taking its TeakLite-4 v2 core (a deeper look from Eric Esteve) into wearable territory, offering a combination of Bluetooth LE, audio/voice functions, and sensor fusion capability. A DSP approach comes into play because sensor fusion calls for algorithms like Kalman filtering and Rao-Blackwellized particle filters (a toy example follows). CEVA has also added instructions supporting Bluetooth baseband operations.
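To make the sensor fusion workload concrete, here is a toy scalar Kalman filter – the kind of predict/update loop that maps well onto a DSP’s multiply-accumulate hardware. This is a textbook illustration, not CEVA’s code; think of fusing a gyro rate (predict) with an accelerometer tilt angle (update):

```python
class Kalman1D:
    """Minimal scalar Kalman filter: integrate a rate sensor in predict(),
    correct with a noisy absolute measurement in update()."""

    def __init__(self, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
        self.q, self.r = q, r    # process / measurement noise variances (assumed)
        self.x, self.p = x0, p0  # state estimate and its variance

    def predict(self, rate, dt):
        self.x += rate * dt      # propagate the state using the rate sensor
        self.p += self.q         # uncertainty grows with every prediction

    def update(self, z):
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # pull the estimate toward the measurement
        self.p *= (1.0 - k)             # uncertainty shrinks after the correction
        return self.x
```

A Rao-Blackwellized particle filter, as used in FootSLAM, runs many such analytic updates conditioned on particles covering the nonlinear part of the state – which is exactly why DSP throughput matters here.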
For voice triggering, CEVA has partnered with Alango for functions like echo cancellation, feedback reduction, dynamic range compression, noise reduction, and more. For visual triggering, we’ve previously looked at Visidon, which can take an always-on camera and recognize faces or gestures.
Several readers commented in a previous discussion on wearable SoCs that ultra-low power figures depend on what is being run; with that in mind, CEVA claims the CEVA-TL410 core in 28 nm runs an always-on mix of voice triggering, face triggering, audio playback, and sensor fusion at around 0.8 mW.
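A quick back-of-envelope shows why that figure matters. Assuming an illustrative 100 mAh, 3.7 V wearable cell (my assumption, not a CEVA spec), the always-on mix alone could run for nearly three weeks:

```python
# Illustrative battery-life arithmetic for the always-on core by itself.
# The 100 mAh / 3.7 V cell is an assumed typical wearable battery, and this
# ignores the display, radio bursts, and everything else on the board.
capacity_wh = 0.100 * 3.7           # ~0.37 Wh of stored energy
core_power_w = 0.8e-3               # CEVA's claimed always-on mix: 0.8 mW
hours = capacity_wh / core_power_w  # ~462 hours
print(f"{hours:.0f} h ≈ {hours / 24:.0f} days")  # ~19 days
```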
For more on what CEVA is doing for wearables, a new TechOnline webinar is available:
DSP solution for always-on audio/voice/sensing and connectivity in wearable devices
For wearables to live up to the broader scale of expectations, new SoCs need to be developed that tackle the always-on environment with new user interfaces plus context awareness driven by sensor fusion. This will mean a new approach to core design, targeting a specific function set within an ultra-low-power envelope.