Qualcomm’s teaser of its upcoming Snapdragon 820 system-on-chip (SoC) was expected to address the overheating issues and bad press that haunted its predecessor, the Snapdragon 810. Instead, the San Diego, California–based semiconductor giant chose to show off the chip’s GPU and image processing muscle. In particular, its Spectra image signal processor paints a rosy picture of next-generation camera applications such as 3D vision, augmented reality, virtual reality and deep learning.
Qualcomm reiterated its strategic focus on computer vision technology when it released more details of the Hexagon 680 DSP inside the Snapdragon 820 at the Hot Chips conference in Cupertino, California on August 24, 2015. The Hexagon 680 is the next iteration of the DSP technology that Qualcomm uses to offload multimedia tasks from the CPU cores in Snapdragon chips.
Machine vision is creating new opportunities for smartphones
Qualcomm’s new DSP technology boasts a wide vector engine—which it calls Hexagon Vector eXtensions, or HVX—for compute-intensive workloads in computational photography, computer vision, virtual reality and photo-realistic graphics on mobile devices. Moreover, it widens the single instruction, multiple data (SIMD) datapath from 64 bits to 1,024 bits in order to carry out image processing with far greater vector throughput.
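As a rough back-of-the-envelope illustration—plain Python, not Hexagon code—here is why that widening matters: with 8-bit pixels, a 1,024-bit vector instruction can touch 128 pixels at once, versus 8 for a 64-bit unit. The sketch models a vectorized brighten pass where each "instruction" handles one vector-wide chunk.

```python
# Sketch: why a 1,024-bit SIMD unit speeds up pixel work.
# With 8-bit pixels, 1024 / 8 = 128 pixels fit in one vector,
# versus 64 / 8 = 8 pixels for a 64-bit SIMD unit.

VECTOR_BITS = 1024
PIXEL_BITS = 8
LANES = VECTOR_BITS // PIXEL_BITS  # 128 pixels per vector instruction

def brighten(pixels, delta):
    """Model a vectorized brighten pass: each 'instruction' processes
    one LANES-wide chunk of the pixel buffer, saturating at 255."""
    out = []
    instructions = 0
    for start in range(0, len(pixels), LANES):
        chunk = pixels[start:start + LANES]              # one vector load
        out.extend(min(p + delta, 255) for p in chunk)   # one vector add
        instructions += 1
    return out, instructions

# A 1,920-pixel scanline needs only 15 wide-vector operations.
row = [100] * 1920
result, ops = brighten(row, 20)
print(ops)        # 15
print(result[0])  # 120
```

The same loop on a 64-bit unit would take 240 such operations per scanline, which is the crux of the HVX pitch.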
Qualcomm is using three DSPs in the Snapdragon 820 chip: one for image processing, one for the wireless modem and one for always-on sensor listening. However, it’s the Hexagon 680 DSP-based Spectra image signal processing unit that is grabbing the most headlines. It’s centered on the premise of enhanced computer vision, which now aims to take smartphones and tablets to an entirely new level of imaging experience.
Why DSP Matters in Computer Vision
Vision applications have largely relied on CPUs, GPUs, FPGAs and DSPs, but for mobile devices like smartphones and tablets, programmable DSP solutions are becoming a strategic choice because they consume less power and die area on vision SoCs. The leverage comes from the fact that instruction sets in DSPs are focused on single-core performance and are tailored for specific applications like audio or image processing.
Moreover, ISAs in DSP cores are often built around very long instruction word (VLIW) architectures, which issue multiple operations to parallel execution units in a single instruction. That significantly boosts performance in specific applications like image and video processing. DSPs also offer ISA-level support for critical operations such as histograms, look-up tables (LUTs) and sliding-window filters.
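To make those primitives concrete, here is a minimal sketch—plain Python, not DSP code—of two of the operations named above: a LUT transform (each pixel value indexes a precomputed table, as in gamma correction) and a 1-D sliding-window filter. A vision DSP would execute each of these as a handful of wide instructions rather than a Python loop.

```python
# Sketch of two ISA-level DSP primitives: LUT transform and
# sliding-window filter (1-D, 3-tap box filter with edge clamping).

def lut_apply(pixels, lut):
    """LUT transform: each pixel value indexes a precomputed table,
    e.g. a gamma curve or a photographic negative."""
    return [lut[p] for p in pixels]

def box_filter_3(pixels):
    """3-tap sliding-window average, the 1-D analogue of the window
    filters used for image smoothing; edges are clamped."""
    n = len(pixels)
    out = []
    for i in range(n):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, n - 1)]
        out.append((left + pixels[i] + right) // 3)
    return out

invert = [255 - v for v in range(256)]   # LUT: photographic negative
print(lut_apply([0, 128, 255], invert))  # [255, 127, 0]
print(box_filter_3([10, 20, 30, 40]))    # [13, 20, 30, 36]
```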
DSP is now the third main processor in mobile SoCs along with CPU and GPU
Qualcomm’s tightening focus on next-generation camera applications with the launch of the Hexagon 680 is a stark reminder that DSP engines are going to be the workhorse of computer vision and other imaging-centric apps in smartphones, tablets and wearable devices. It’s a sea change in mobile SoC design and a harbinger of the camera envy consumers will most likely see in smartphones and tablets coming to market in 2016.
And the DSP-centric image and video processing pitch is coming from a chipmaker with a reputation for being a step ahead in the mobile semiconductor market. Qualcomm has set the benchmark for dual-camera and dual-sensor applications in smartphones, and it’s a testament that the image processor is now positioned as the third pillar of the mobile SoC recipe, after the CPU and GPU.
The Other Vision DSP
Another company that has been advocating DSP-based solutions for computer vision on mobile devices is CEVA Inc. The supplier of DSP cores recently launched the XM4 vision processor—a low-power DSP and memory subsystem IP core designed from the ground up to meet the heavy computing needs of image processing and computer vision applications on mobile devices.
The CEVA-XM4 is the company’s fourth-generation imaging and vision processor IP that boasts a mix of scalar and vector engines, VLIW, and SIMD functions for heavy-duty signal processing workloads. It also features a power scaling unit (PSU) that allows SoC designers to scale power according to application requirements and thus minimize the overall power consumption.
XM4 is designed for mobile and embedded vision systems
The CEVA-XM4 is a vision-optimized DSP engine that offloads compute-intensive imaging algorithms from CPUs and GPUs so that designers of mobile devices can employ advanced algorithms without compromising on image quality or battery life. The vision algorithms that the XM4 processor supports include real-time 3D depth map generation and point cloud processing for 3D scanning, object and image recognition, and deep learning technologies like convolutional neural networks (CNNs).
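The deep-learning workloads mentioned above reduce largely to 2-D convolutions, which is exactly the dense multiply-accumulate pattern these vision DSPs are built to accelerate. As a generic illustration—not CEVA or Qualcomm code—here is a minimal "valid" (unpadded) 2-D convolution, the core building block of a CNN layer, applied with a vertical-edge kernel:

```python
# Minimal 2-D 'valid' convolution, the core CNN building block:
# slide a kernel over the image; output shrinks by k-1 per dimension.

def conv2d_valid(image, kernel):
    """Convolve image (list of rows) with kernel (list of rows),
    with no padding, accumulating products at each window position."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# A 3x3 vertical-edge kernel on a 4x4 image with a hard edge:
img = [[0, 0, 9, 9]] * 4
edge = [[-1, 0, 1]] * 3
print(conv2d_valid(img, edge))  # [[27, 27], [27, 27]]
```

A single CNN layer runs millions of these multiply-accumulates per frame, which is why offloading them to a vector DSP rather than the CPU matters for battery life.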
These vision algorithms will create a matrix of possibilities for the smartphones of 2016, for which two cameras on the back and one on the front are going to be the norm. These phones, with mega sensors and high-resolution screens, will enable a new breed of features encompassing 3D vision, computational photography, visual perception and analytics. And that’s a lot of work for the CPU and GPU on a smartphone’s application processor.
A DSP running at half the clock speed of a CPU can achieve similar image processing results. Likewise, using a GPU as a compute engine in vision processing applications can yield lower performance due to strict memory constraints. So more mobile SoC makers may line up a vision processor next to the CPU and GPU to make the most of the new era of computer vision on smartphones and other mobile devices.