With its well-chronicled share of cellular baseband interfaces for mobile devices, one might think that is the entire CEVA story, especially going into Mobile World Congress 2014 this week. MWC is still a phone show, but it is becoming more and more about the Internet of Things and wearables, and CEVA and its ecosystem are showing solutions for these spaces.
One of the unique features of the Samsung Galaxy S4 was “smart pause” – pausing content playback when the user is distracted and looks away from the screen. This offers convenience, so nothing is missed; it also serves as a power-saving feature. The same concept goes into facial activation, pioneered by Visidon and enabled by the CEVA-TeakLite-4 sensor fusion capability. Offloading the facial activation algorithm to a low-power DSP core means considerable power savings in a device, allowing the capability to remain always on in the background, waiting for the user. Visidon AppLock (available for Android on Google Play) also serves as a biometric security mechanism, looking for a particular user’s face before allowing access to an app.
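The power argument behind the offload is simple duty-cycle arithmetic: a small DSP can keep the detector alive around the clock at a fraction of what the application processor would burn doing the same watch. A minimal sketch, with purely hypothetical milliwatt figures (these are illustrative placeholders, not CEVA or Visidon numbers):

```python
# Hypothetical numbers: an application processor burning 300 mW to keep a
# camera-based face detector alive, vs. a small DSP doing the same watch
# at 15 mW and waking the AP only when a face is actually seen.
AP_WATCH_MW = 300.0   # assumed AP power while running the detector
DSP_WATCH_MW = 15.0   # assumed DSP power for the same always-on watch

def always_on_cost_mwh(watch_mw, hours):
    """Energy spent keeping the detector alive for `hours` hours, in mWh."""
    return watch_mw * hours

ap_day = always_on_cost_mwh(AP_WATCH_MW, 24)    # 7200 mWh per day
dsp_day = always_on_cost_mwh(DSP_WATCH_MW, 24)  # 360 mWh per day
print(f"AP: {ap_day:.0f} mWh/day, DSP: {dsp_day:.0f} mWh/day, "
      f"saving {ap_day / dsp_day:.0f}x")
```

Under these assumed figures the DSP-hosted watch costs 20x less energy per day – the kind of gap that decides whether a feature can be left always on at all.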
Similarly, the popularity of Nuance Dragon, Apple Siri, Google Now, and Xbox One Kinect voice commands is driving user expectations for that kind of speech recognition capability in everything – even small, low-power devices. Again, remaining always on and listening for voice commands calls for a low-power DSP core, and the CEVA-TeakLite-4 comes into play in the latest implementation of Sensory TrulyHandsfree Version 3.0. Sensory has unique algorithms that allow users to deliver commands from as far away as 20 feet and that filter out background noise, claiming 95% accuracy with no false fires. Sensory has traditionally offered its own processing silicon, but teaming with CEVA allows the capability to be offered directly in CEVA-enabled SoCs.
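The always-listening pattern generally works as a cascade: a cheap first stage runs continuously on the DSP, and only audio that passes it reaches an expensive recognizer. A minimal sketch of that gating idea, using a simple frame-energy test as a stand-in for a real keyword spotter (the threshold and frame size are illustrative assumptions, not Sensory's algorithm):

```python
# A cheap always-on first stage: score each audio frame by its energy and
# wake the heavyweight recognizer only for frames that clear a threshold.

def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / len(samples)

def frames_to_wake(frames, threshold=0.01):
    """Return indices of frames loud enough to wake the big recognizer."""
    return [i for i, f in enumerate(frames) if frame_energy(f) >= threshold]

# Toy stream: mostly near-silence, one loud frame of speech-like amplitude.
quiet = [0.001] * 160
loud = [0.5] * 160
stream = [quiet, quiet, loud, quiet]
print(frames_to_wake(stream))  # only the loud frame (index 2) triggers
```

In a real product the first stage is a trained spotter rather than an energy gate, but the power story is the same: the expensive path sleeps almost all the time.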
Even more powerful solutions combine DSPs in devices with the cloud to provide emotion recognition. The basic use case is to gauge reaction to content as the stream plays, while the user watches on a mobile device, game console, or digital signage platform with a front-facing camera. Advertisers, content producers, political pollsters, and others can determine not only whether their message was viewed, but how the viewer feels about what they see and hear, without the user having to respond to an overt poll request. CEVA has partnered with nViso to bring facial micro-expression recognition software to the CEVA-MM3101 vision platform, again with the implementation taking a fraction of the power otherwise needed. This embedded vision platform is integrated with CEVA’s Android Multimedia Framework – our own Eric Esteve provided background on AMF previously.
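A device-plus-cloud split like this typically means the embedded vision stage does the cheap, privacy- and bandwidth-friendly part locally (finding a face), and only gated results go upstream for the heavier emotion scoring. A minimal sketch of that split; the function names and frame representation are hypothetical stand-ins, not the nViso or CEVA-MM3101 APIs:

```python
# Local gate: only frames in which the embedded vision stage found a face
# leave the device for cloud-side emotion scoring.

def face_present(frame):
    """Hypothetical stand-in for the on-device detector: a frame is a dict
    that may carry a 'face' bounding box found by the local vision stage."""
    return frame.get("face") is not None

def frames_for_cloud(frames):
    """Only frames that pass the cheap local gate are uploaded."""
    return [f for f in frames if face_present(f)]

stream = [{"face": None}, {"face": (40, 30, 64, 64)}, {"face": None}]
print(len(frames_for_cloud(stream)))  # 1 of 3 frames would be uploaded
```

The design choice is the same one behind the voice and face-wake cases: keep the continuous, high-rate work on low-power local silicon and spend radio and cloud cycles only on frames that matter.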
CEVA has even more on display in Barcelona at #MWC14; visit http://events.ceva-dsp.com/mwc14 for videos of demos of these and other DSP-enabled applications for mobile, IoT, and wearable devices, and follow @CEVADSP on Twitter.
The theme here is consistent: optimized DSP cores and algorithms can provide huge power savings, even when running complex voice and imaging algorithms, and enable more natural inputs for devices. This capability will become far more important for wearables, which, due to their reduced size, will not have the luxury of virtual keyboards and larger touchscreens, and will need to be very power efficient to run on small batteries. CEVA and their ecosystem are rising to the challenge, creating new solutions for designers working in these tight spaces.