Compute at the Edge
by Bernard Murphy on 05-01-2019 at 7:00 am

At first glance, this seems like a ho-hum topic – just use whatever Arm or RISC-V solution you need – but think again. We’re now expecting to push an awful lot of functionality into these edge devices. Our imaginations don’t care about power, performance and cost; everything should be possible, so let’s keep adding cool features. Of course reality has to intrude at some point; edge nodes often aren’t plugged into a wall socket or even into a mobile-phone-class battery. Power and recharge constraints (as well as cost) don’t necessarily mean our imagined products are unattainable, but they do require more careful thought about how they might be architected.

Start first with what we might want to build. Cameras must continue to at least keep pace with your cellphone camera, so they have added remote control and voice activation. VR and AR headsets need to recognize your head and body position to correctly orient a game scene or position AR overlays in real-world scenes. Headphones are becoming increasingly smart in multiple ways, recognizing you through the unique structure of your ear canal, recognizing voice commands to change a playlist or make a call, detecting a fall or monitoring heart rate and other vital signs. Home security systems must recognize anomalous noises (such as breaking glass) or anomalous figures/movement detected on cameras around the house.

Each of these capabilities demands multiple compute resources. First and most obviously, you need communication; none of these wonderful products will be useful standalone. In some cases, communication may be through relatively short-range protocols such as Bluetooth or Wi-Fi; in other cases you may need cellular support, through NB-IoT for small packet transfers (such as from a parking meter) or through LTE or 5G for broadband support (drones or 4k/8k video streaming, for example). Whichever protocol you choose, you need a modem, and for cellular it probably needs to support MIMO with beam-forming to ensure reasonable connectivity.

Modems are specialized beasts, usually best left to the experts. You could buy a standalone chip, but then your product needs at least two chips (one for everything else). That makes it more expensive and more of a power hog – definitely not stretching to a 10-year battery life, maybe not even 10 hours. The best choice for PPA is an integrated modem with tight power management, especially for the power amp.

Now think about compute for sensing, where a good example is a 9-axis sensor, fusing raw data from a 3-axis accelerometer, 3-axis geomagnetic sensor and 3-axis gyroscope, such as you might use in a VR/AR headset. Together these sensors can provide information on orientation and movement with respect to a fixed Earth frame, which is just what you need for a realistic virtual gaming experience or orienting virtual support information and controls against a real machine you want to manage.

This fusion requires yet more compute, heavily trigonometric along with filtering, which could be accomplished in a variety of ways but needs to be more or less always-on during use. You could make some allowance for human response times, perhaps allowing for an update every 1/60th of a second, but that’s still pretty continuous demand. Again, you could get this through an integrated chip solution, but for all the PPA reasons mentioned earlier an ideal solution would be embedded in your one-chip SoC. And since the fusion algorithms are math-intensive, a DSP is a pretty natural fit.
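To make the “heavily trigonometric plus filtering” point concrete, here is a minimal sketch of one classic fusion technique, a complementary filter, which blends a gyroscope’s low-noise but drifting rate integration with an accelerometer’s noisy but drift-free gravity-based tilt. All names and numbers here are illustrative, not taken from any particular sensor hub.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One update step of a complementary filter for pitch (radians).

    alpha weights the gyro integration (smooth, but drifts) against the
    accelerometer estimate (noisy, but anchored to gravity).
    """
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)   # gravity-based tilt
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Example: device held still and level, gyro reporting zero rate,
# starting from a stale estimate 0.1 rad off. Run 200 updates at 60 Hz.
angle = 0.1
for _ in range(200):
    angle = complementary_filter(angle, 0.0, 0.0, 1.0, 1 / 60)
# The accelerometer term steadily pulls the estimate back toward 0.
```

Even this toy version needs a trig call and a multiply-accumulate every cycle per axis, at a fixed 60 Hz rate – exactly the kind of small, relentless math load that maps naturally onto a DSP.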

One more example – AI embedded in your product. AI is ramping fast in edge-based devices in a lot of use-cases; here let’s consider just voice-control. This needs multiple components – audio beamforming, noise management and echo-cancellation, and trigger-word recognition at minimum. Beamforming, echo cancellation (especially indoors) and noise filtering are all DSP functions. Perhaps you could prove these are possible on some other platform but you’d never compete with DSP-based products. Trigger-word recognition gets into neural nets (NN), the heart of AI. And in many cases it needs to be combined with voice recognition – recognizing who is speaking rather than what is being said. Again, DSPs are a well-recognized low-power, high-performance option in the NN implementation spectrum, above CPUs, FPGAs and GPUs (and below full-custom solutions like the Google TPU).
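As a flavor of why beamforming is called a DSP function, here is a sketch of the simplest variant, delay-and-sum: each microphone channel is shifted by its known arrival delay so that sound from the steered direction adds coherently while off-axis noise adds incoherently. Integer sample delays keep the sketch short; real implementations use fractional-delay filters and run continuously.

```python
import math

def delay_and_sum(channels, delays_samples):
    """Delay-and-sum beamformer over equal-length mic channels.

    A channel that hears the wavefront d samples late is read d samples
    ahead, aligning all channels before averaging.
    """
    n = len(channels[0])
    out = [0.0] * n
    for ch, d in zip(channels, delays_samples):
        for i in range(n):
            j = i + d              # advance late channel by its delay
            if 0 <= j < n:
                out[i] += ch[j]
    return [v / len(channels) for v in out]

# Two mics hearing the same tone, the second one 3 samples late.
tone = [math.sin(2 * math.pi * 0.05 * i) for i in range(100)]
mic1 = tone
mic2 = [0.0] * 3 + tone[:-3]
steered = delay_and_sum([mic1, mic2], [0, 3])
# After alignment the channels reinforce: steered matches the tone
# over the interior of the buffer.
```

The per-sample shift-multiply-accumulate structure is the textbook DSP workload; the same hardware then feeds echo cancellation, noise filtering and the trigger-word neural net downstream.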

GPUs are very well known in the AI domain, but primarily in NN training and in prototypes or cost/power-insensitive applications. Mobile VR headsets you may have seen are likely to be based (today) on these platforms but they’re expensive (~$1k for the chip alone) and deliver short battery lives (I haven’t heard the latest on the Magic Leap, but I do know you need to wear a battery on your belt and they have been cagey about time between charges – maybe a few hours at most).

Finally, full operation of your ground-breaking product requires some level of remote functionality, but you probably don’t want to depend on it being up all the time. And you would probably prefer that sensitive information (health data, credit cards, face ID, etc.) not travel over possibly insecure links to possibly hackable cloud-based platforms. You don’t want your semi-autonomous drone crashing into a tree because it lost line of sight with a base station, or flying off to someone else who figured out how to override your radio control. That means you need more intelligence and more autonomy in the device, for collision avoidance, for path finding and for target object detection, without having to turn to the cloud. Which means a need for more AI at the edge.
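The “don’t depend on the cloud” architecture boils down to an edge-first inference pattern, sketched below under assumed interfaces: `local_model` and `cloud_client` are hypothetical callables, not any real API. The safety-critical answer always comes from the device; the cloud is an optional refinement, never a dependency.

```python
def classify(frame, local_model, cloud_client=None, confidence_floor=0.8):
    """Edge-first inference: always run the local model; consult the
    cloud only when local confidence is low and a link exists.

    local_model(frame) -> (label, confidence); cloud_client(frame) -> label.
    Both interfaces are illustrative assumptions.
    """
    label, conf = local_model(frame)
    if conf >= confidence_floor or cloud_client is None:
        return label            # safety-critical path stays on-device
    try:
        return cloud_client(frame)
    except OSError:             # link down: fall back to the local answer
        return label

# A confident local model never touches the cloud.
confident = lambda frame: ("tree", 0.95)
print(classify(None, confident))

# A low-confidence result with a dead link still returns locally.
uncertain = lambda frame: ("tree", 0.3)
def dead_link(frame):
    raise OSError("no connectivity")
print(classify(None, uncertain, dead_link))
```

The drone’s collision avoidance lives entirely inside the `local_model` branch; losing the radio link degrades refinement, not safety.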

All of the functions I have talked about here are supported on DSP platforms and some can potentially be multiplexed on a single DSP. You probably still want a CPU or MCU as well, for administration, authorization, provisioning and whatever other algorithms you need to support. Not so much for the AI; you can get basic capabilities on CPUs/MCUs but they tend to be quite limited compared with what you can find on DSP platforms. If you want to learn more about what is possible in communication, sensor fusion and AI at the edge, check out CEVA.