Sensor Fusion in Hearables. A powerful complement
by Bernard Murphy on 10-29-2020 at 6:00 am

I must admit I’m impressed with how CEVA is pulling together foundational solutions for advanced consumer electronics. They’re doing it through their own technologies (DSP, audio, vision, neural nets) and through a rapid pace of partnerships, investments and acquisitions. Off the top of my head, I remember recent announcements on Immervision for image correction for wide-angle lenses, partnerships for 3D/spatial audio, and Hillcrest Labs for motion sensing and fusion. I’ll talk here about the last one, as applied to hearing: sensor fusion in hearables.

Sensor Fusion in Hearables

Why sensor fusion in hearables?

Isn’t that an OEM problem, not an audio IP problem? Not according to Seth Sternberg, product manager at CEVA. Fusion is playing an increasingly important role in hearables. Start with the stuff we already know. Tap on an earbud to take or start a call; that tap is detected by an accelerometer. Take an earbud out of your ear and music stops playing. Put it back in and music starts again. Both are based on proximity detection. So far, no fusion needed. But these sensing techniques are imperfect. For example, if I take the earbud out of my ear and hold it in my hand or lay it face down on a table, the proximity sensor thinks it’s back in my ear. Fuse the proximity detection with other sensors, e.g. an accelerometer to detect motion, and you cover some of those cases better, as sketched below.
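
Here is a minimal sketch of what such a fusion rule could look like. Everything in it – names, thresholds, and the motion heuristic – is a hypothetical illustration, not CEVA’s implementation:

```python
import math

# Hypothetical fused in-ear detector. Proximity alone misfires when the
# bud is cupped in a hand or laid face down, so we also require the
# accelerometer to see the small, continuous motion of a worn earbud
# rather than the stillness of a tabletop.

PROX_COVERED = 0.8    # normalized proximity reading treated as "covered"
STILLNESS_VAR = 0.05  # accel-magnitude variance below which the bud is static

def accel_variance(samples):
    """Variance of accelerometer magnitude over a short window of (x, y, z)."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

def in_ear(proximity, accel_window):
    """Fuse proximity with motion: covered AND gently moving suggests
    in-ear; covered but perfectly still suggests face down on a table."""
    if proximity < PROX_COVERED:
        return False  # nothing near the sensor at all
    return accel_variance(accel_window) > STILLNESS_VAR
```

A production detector would fuse more signals still (capacitive touch, skin temperature), since a bud held in a moving hand also defeats this simple rule.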

3D audio

3D audio is another use case. This is where an audio source is positioned in space, and as you move your head the apparent source position remains constant: virtual audio, like virtual video. Tracking this takes 9-axis motion detection – gyroscope, accelerometer and magnetometer. These are fused together and then guide either object-based audio or ambisonics to create the illusion of a fixed audio source. Apple just put this in the AirPods Pro, and both the PS5 and Xbox support the feature for gaming. Another application is fitness tracking. There’s a definite case for putting this in earbuds since our heads may move more reliably than our wrists during exercise.
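
To make the fusion step concrete, here is a toy complementary filter for a single rotation axis. Real head trackers fuse all nine axes with quaternion-based Kalman or Madgwick-style filters; this sketch (my own illustration, with made-up names and gains) only shows the core idea of combining a fast-but-drifting gyro with a slow-but-absolute reference such as the magnetometer:

```python
ALPHA = 0.98  # trust in the integrated gyro each step; the remainder
              # pulls the estimate toward the absolute reference

def complementary_step(angle, gyro_rate, reference_angle, dt):
    """One fusion update for a single axis.
    angle:           current fused estimate (degrees)
    gyro_rate:       angular rate from the gyroscope (deg/s)
    reference_angle: noisy but drift-free angle (deg), e.g. magnetometer yaw
    dt:              sample interval (s)
    """
    predicted = angle + gyro_rate * dt  # responsive, but drifts over time
    return ALPHA * predicted + (1.0 - ALPHA) * reference_angle

# Usage at 100 Hz: the estimate follows quick head turns via the gyro
# while the reference slowly cancels the accumulated gyro drift.
angle = 0.0
for gyro_rate, mag_yaw in [(10.0, 0.5), (12.0, 0.7), (-3.0, 0.9)]:
    angle = complementary_step(angle, gyro_rate, mag_yaw, dt=0.01)
```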

Context awareness

Context-aware fusion is another application. If I’m in a noisy area, or if I’m running, why not have the earbuds crank up the volume a little? Or perhaps I’m making a call, again in a noisy area. A sensor can detect when I am speaking because my jaw is moving. A front-facing microphone can amplify pickup while I am speaking and cancel pickup otherwise, cutting down most of the background babble.
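
A rough sketch of both behaviors, again with purely illustrative names and numbers rather than anything from an actual product:

```python
NOISE_FLOOR_DB = 40.0  # ambient level below which no boost is applied
GAIN_PER_DB = 0.3      # dB of playback boost per dB of ambient noise
MAX_BOOST_DB = 12.0    # cap so loud environments don't harm hearing

def playback_boost(ambient_db):
    """Context-aware volume: more ambient noise -> more boost, capped."""
    boost = max(0.0, ambient_db - NOISE_FLOOR_DB) * GAIN_PER_DB
    return min(boost, MAX_BOOST_DB)

def voice_mic_gain(jaw_motion_energy, threshold=0.2):
    """Gate the voice-pickup mic on accelerometer-detected jaw motion,
    so babble between utterances is not amplified."""
    return 1.0 if jaw_motion_energy > threshold else 0.0
```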

The devil’s in the detail with sensors

This is just a sample of possible applications for fusion in audio. Now dig a little deeper. All of this sensing depends on processing inputs from MEMS devices, then combining those inputs. The devices are imperfect: they’re noisy, they drift, and they must be recalibrated regularly against other sensors and other inputs. Handling this requires special expertise and a detailed understanding of sensors from multiple manufacturers: STMicroelectronics, Bosch Sensortec, TDK InvenSense and others. Recalibration adds another wrinkle. If drift has become noticeable, the wearer doesn’t want to experience a sudden correction; it must be blended in smoothly and transparently. Conversely, when correcting motion estimates in a robot (say, computer vision for a vacuum cleaner), you want the correction applied immediately. The robot doesn’t get motion sickness, and it should avoid crashing into the dog. Which underlines that how corrections are managed must be tuned to the application.
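
The two policies are easy to contrast in code. This is my own illustration of the principle, not CEVA’s algorithm:

```python
def smooth_correction(estimate, target, rate=0.02):
    """Hearable policy: slew the estimate toward the recalibrated target
    a small fraction per update, so the user never perceives a jump."""
    return estimate + (target - estimate) * rate

def immediate_correction(estimate, target):
    """Robot policy: jump straight to the recalibrated value; accuracy
    matters more than perceptual smoothness."""
    return target

# A 5-degree heading bias: the hearable converges over a few seconds of
# updates, while the robot corrects in a single step.
estimate = 95.0
for _ in range(100):
    estimate = smooth_correction(estimate, target=100.0)
```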

An easier life for OEMs, a better experience for users

The same is true for fusion, according to Seth: some expectations are pretty uniform across applications, while others need application-specific tuning. All of which can make sensor fusion a nightmare for OEMs. One of CEVA’s big aims in this release of MotionEngine Hear is to shoulder most of that burden. The software package handles all of the direct interface with the sensors, noise management, recalibration and fusion, so that OEMs can focus on what will differentiate their product. For the end-user, the software brings context awareness and closes some of the gaps for true wireless stereo earbuds, such as more accurate in-ear detection. MotionEngine Hear is platform-agnostic, though naturally CEVA would love to have you run it on their platforms. You can learn more HERE.

Also Read:

Low Energy Intelligence at the Extreme Edge

Combo Wireless. I Want it All, I Want it Now

Wi-Fi Bulks Up
