I wrote last year about Eta Compute and their continuously tuned dynamic voltage-frequency scaling (CVFS). That piece was mostly about the how and why of the technology: in self-timed circuits (a core technology for Eta Compute) it is possible to continuously vary voltage and frequency, whereas conventional synchronous logic can only switch between a few discrete voltage and frequency options. You might think ‘self-timed, this must be about performance’, but in fact Eta Compute is pushing it for ultra-low power at the extreme edge in AI applications.
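The power advantage of continuous scaling over discrete operating points can be sketched with the standard dynamic-power relation P = C·V²·f. This is a minimal illustration with hypothetical numbers (none of the capacitance, voltage, or frequency values below come from Eta Compute), assuming for simplicity that a continuously scaled supply can track frequency linearly between its endpoints:

```python
C = 1e-9  # effective switched capacitance in farads (assumed, illustrative)

def dynamic_power(v, f):
    """Approximate dynamic CMOS power in watts: P = C * V^2 * f."""
    return C * v**2 * f

# Discrete DVFS: only a few fixed (voltage, frequency) operating points.
discrete_points = [(0.6, 10e6), (0.9, 50e6), (1.2, 100e6)]

def discrete_power(f_required):
    # Must pick a supported point that meets the required frequency,
    # then take the cheapest such point.
    v, f = min((p for p in discrete_points if p[1] >= f_required),
               key=lambda p: dynamic_power(*p))
    return dynamic_power(v, f)

def continuous_power(f_required):
    # Continuous scaling: settle at exactly the frequency needed, with
    # voltage tracking frequency linearly (a simplification).
    (v_lo, f_lo), _, (v_hi, f_hi) = discrete_points
    v = v_lo + (v_hi - v_lo) * (f_required - f_lo) / (f_hi - f_lo)
    return dynamic_power(v, f_required)

f_req = 20e6  # workload needs 20 MHz, between two discrete points
print(discrete_power(f_req) / continuous_power(f_req))  # discrete wastes power
```

The point of the sketch: whenever the workload’s required frequency falls between discrete operating points, the discrete design must round up to a faster, higher-voltage point and pays the V² penalty, while the continuously scaled design does not.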
I haven’t talked with them in a while, so I confess I’m catching up. From what I see, it looks like they’ve found their sweet spot: power-constrained applications where some level of inference is required. They cite as examples intelligent sensing and/or voice activation and control in:
- Building: thermostats, smoke detectors, alarm sensors
- Home consumer: washing machines, remote controls, TVs, earbuds
- Medical and fitness: fitness bands, health monitors, patches, hearing aids
- Logistics: asset tracking, retail beacons, remote monitoring
- Factory: motors, industrial networks, industrial sensors
The most recent Eta Compute solution is realized in their ECM3532 neural sensor processor. This is a system-on-chip with an Arm Cortex-M3 processor and an NXP CoolFlux DSP, 512KB of Flash, 352KB of SRAM, and supporting peripherals. All of this is built with Eta Compute’s proprietary CVFS technology, operating near threshold voltage.
The dual-MAC DSP handles signal processing from sensors, feature extraction and inferencing. The MCU handles application software, control and networking. I’ve seen this combination in other products (though not built on CVFS technology), so it looks like an up-and-coming architecture to me.
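The division of labor can be sketched roughly as follows. This is a hypothetical outline of the data flow only; none of the function names or the toy arithmetic come from Eta Compute’s SDK:

```python
def dsp_pipeline(raw_samples):
    """DSP side: signal conditioning, feature extraction, inference.
    The arithmetic here is a stand-in for filtering/FFT plus a neural net."""
    features = [s * 0.5 for s in raw_samples]   # stand-in for feature extraction
    score = sum(features) / len(features)       # stand-in for inference
    return score > 0.5                          # e.g. "keyword detected?"

def mcu_application(detected):
    """MCU side: application logic, control and networking decisions."""
    return "wake host and report event" if detected else "stay in low-power sleep"

samples = [0.9, 1.2, 1.4, 1.1]  # hypothetical sensor window
print(mcu_application(dsp_pipeline(samples)))
```

The design point is that the DSP can churn through the always-on, compute-heavy path at near-threshold voltage, while the MCU only wakes for the comparatively rare application-level events.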
Eta Compute’s benchmarking shows the Cortex MCU running at up to 10X lower power than competitive solutions across a wide range of temperatures and process corners. Even more important, they ran a range of neural net benchmarks: image recognition, sound recognition (e.g. glass breaking), motion sensing, always-on keyword recognition and always-on command recognition. In all cases they run at a few hundred microamps while performing multiple inferences per second (up to 50 for motion sensing).
Overall, Eta Compute say they can already reduce power in AI at the extreme edge by a factor of 10. This is for published networks, not ones specifically optimized for extreme edge applications. They have been running trials with partners to further optimize networks and have already demonstrated an additional 10X improvement in efficiency in image recognition by reducing operations by a factor of 10 and weight sizes by a factor of 2. Compared with a common MCU-only implementation, they claim 1000X higher efficiency.
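The claimed factors compound multiplicatively, which is worth spelling out. Note the final 10X term is implied rather than stated: getting from the 100X over competitive solutions to the quoted 1000X over an MCU-only implementation requires that the MCU-only baseline be about 10X less efficient than those competitive solutions:

```python
cvfs_gain = 10           # hardware: CVFS vs competitive solutions (stated)
network_gain = 10        # optimized networks: ops / 10, weights / 2 (stated)
total_vs_competitive = cvfs_gain * network_gain    # 100X

mcu_only_penalty = 10    # implied gap between MCU-only and "competitive"
total_vs_mcu_only = total_vs_competitive * mcu_only_penalty
print(total_vs_mcu_only)  # 1000
```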
At these numbers, intelligence at the extreme edge could become ubiquitous, reaching down to truly remote coin-cell-operated devices, asset trackers, even energy-harvesting devices. Eta Compute don’t yet want to provide customer names, but it sounds like they have quite a few already in development.
Eta Compute recently released a white paper – Deep learning at the extreme edge: a manifesto – on their vision and technology. You can download the white paper HERE.