Cadence has launched the new Tensilica Vision Q6 DSP IP, delivering 1.5X more performance than the previous Vision P6 DSP IP and 1.25X better power efficiency. According to Cadence, the mobile industry is moving from traditional feature-based embedded vision to AI-based algorithms, even if most use cases still involve a mix of vision and AI operations. The result is a need for both vision and AI processing in the camera pipeline, which translates into implementing both the Vision Q6 DSP and the Vision C5 DSP to cover the complete camera processing pipeline.
Implemented in the Huawei Mate 10, Cadence Vision DSPs enable advanced imaging applications like HDR video, image stabilization and hybrid zoom with two scene-facing cameras. Compared to a CPU or GPU, the Vision P6, and now the Q6, helps meet high-resolution video capture requirements thanks to its high performance, and battery-life requirements thanks to much better energy efficiency. The Vision P6 IP core also serves as the AI processing unit in the MediaTek P60, which MediaTek calls the Mobile APU.
If you look at the way MediaTek communicates about the P60, AI capability is highlighted as much as the power of the four Arm Cortex-A73 CPUs, as “users can enjoy AI-infused experiences in apps with deep-learning facial detection (DL-FD), real-time beautification, novel, real-time overlays, object and scene identification, AR/MR acceleration, enhancements to photography or real-time video previews and much more.”
Cadence Vision DSPs are also implemented in chips supporting automotive applications, like the GW5400 camera video processor (CVP) from GEO Semiconductor, where the Vision DSP enables ADAS functions such as pedestrian detection, object detection, blind spot detection, cross traffic alert, driver attention monitoring and lane departure warning, as well as target-less auto calibration (AutoCAL®). For such devices, energy efficiency is key to meeting the very-low-power, zero-airflow requirements of automotive cameras.
According to Mike Demler, senior analyst at The Linley Group: “SoC providers are seeing an increased demand for vision and AI processing to enable innovative user experiences like real-time effects at video capture frame rates. The Q6 offers a significant performance boost relative to the P6, but it retains the programmability developers need to support rapidly evolving neural network architectures. This is a compelling value proposition for SoC providers who also want the flexibility to do both vision and AI processing.”
The race for higher performance in vision processing, together with the emerging need for local AI engines, is impacting all kinds of applications. Looking around, we can list:
Smartphones
Over the next four years, projections show a 3X increase in dual cameras, with smartphone shipments reaching a 50/50 split between single- and dual-sensor devices in 2020. On-device AI experiences at video capture rates are now a feature that helps smartphone suppliers differentiate.

AR/VR
In robotic mapping and navigation, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it. Latency requirements for SLAM and image processing are tightening, again pushing the need for speed. On-device AI is required for object detection/recognition, gesture recognition and eye tracking.

Surveillance
Cameras need higher resolution and better image enhancement techniques, as well as on-device AI for family/stranger recognition and anomaly detection.

Automotive
This is probably the most demanding segment, as it requires an increase in both the number of cameras and camera resolution. On-device AI is clearly a “must have” in ADAS for pedestrian/object recognition.

Drones and robots
360° capture at 4K or greater resolutions and advanced computer vision for autonomous navigation are required, as well as on-device AI for subject and scene recognition.
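The SLAM loop mentioned above, predicting the agent’s pose from odometry and then jointly correcting the pose and the map from a landmark measurement, can be sketched as a minimal 1-D toy. The function name, the single scalar landmark “map” and the fixed blending gain are all illustrative assumptions, not any vendor’s implementation:

```python
def slam_step(pose_est, lm_est, odom, range_meas, gain=0.5):
    """One predict/update cycle of a toy 1-D SLAM filter.

    pose_est:   current estimate of the agent's position
    lm_est:     current estimate of the landmark position (the 'map')
    odom:       odometry reading (measured displacement, possibly biased)
    range_meas: measured distance from the agent to the landmark
    """
    # Predict: dead-reckon the new pose from the odometry reading.
    pose_pred = pose_est + odom
    # Innovation: measured range vs. range predicted from the current map.
    innovation = range_meas - (lm_est - pose_pred)
    # Update: split the correction between the pose and the map estimate.
    pose_new = pose_pred - gain * innovation
    lm_new = lm_est + (1.0 - gain) * innovation
    return pose_new, lm_new
```

With biased odometry, this filtered pose drifts at roughly half the rate of raw dead reckoning, and the pose and map estimates stay mutually consistent with each range measurement; real SLAM systems replace the scalar gain with Kalman filtering or graph optimization, which is where the vision-DSP compute budget goes.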
To increase performance, some obvious solutions, like increasing SIMD width or VLIW slots to bring more parallelism, implementing N cores to multiply the processing power, or simply running the processor at a higher frequency, have severe drawbacks in terms of power consumption, area impact or programming model.
Cadence has reworked the processor architecture, now based on a 13-stage pipeline, and the Vision Q6 can reach a 1.5 GHz peak frequency. Compared with the Vision P6, the Q6 delivers 1.5X performance for vision and AI applications, 1.5X frequency in the same floorplan area, and 1.25X better energy efficiency at the Vision P6’s peak performance. To compare apples with apples, these figures come from implementations in a 16nm process in both cases.
As we can see in the picture above, the complete architecture of the Tensilica Vision Q6 DSP has been reworked, with a deeper pipeline, improved system bandwidth, and imaging and AI enhancements for this fifth-generation Vision DSP IP.
By Eric Esteve from IPnest