The wearer says, “O.K., Glass,” and Glass leaps into action, performing many smartphone functions: checking e-mail, taking photos and videos, providing turn-by-turn navigation, and making and receiving phone calls. Welcome to Smartphone 2.0.
Technology pundits have called Google Glass the best thing to happen to augmented reality since the iPhone. What is augmented reality? In this case, we can say it’s the interface between wearable computing and the Internet of Things (IoT).
Google Glass: A marvel of embedded vision technology
Google Glass itself hasn’t been a smashing consumer success because of a number of strategic missteps, including a high price tag, a lack of compelling applications and a poorly defined value proposition. It was a product ahead of its time when its prototype launched back in early 2013.
However, it’s a revolutionary embedded design that has single-handedly created a new product category of Internet-hooked appliances: smart glasses. The new wearable product category—also labeled as smart eyewear—has attracted consumer electronics giants such as Epson, Intel, Microsoft and Sony as well as a new breed of Kickstarter outfits like Meta and Glassup.
The arrival of these 1.0 products is driving a gold rush in augmented reality, computational photography, and visual perception and analytics applications. However, this technological marvel is still in search of a cause, a.k.a. a utility, and at the same time is fighting a few design conundrums. The two issues are intertwined: the success of smart glass use cases is closely tied to the evolution of product design.
Anatomy of Smart Glass
Smart glasses can, for instance, help people with impaired sight navigate their surroundings. They can also give workers on-the-go access to computing and corporate data: a warehouse layout, a product manual, a sales demo and more. However, the design of a smart glass is a balancing act between a sleek form factor, robust processing performance and energy efficiency.
Early designs like Google Glass comprised a single camera. However, the thicker form factor and dual-lens arrangement of newer smart glass designs provide a natural premise for dual-camera stereoscopic setups. They also invite depth-discerning sensors that can complement object recognition tasks through high dynamic range and advanced pixel interpolation.
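The stereo principle a dual-camera arrangement exploits can be sketched in a few lines: depth follows from the pixel shift (disparity) of a feature between the two views. The focal length and baseline below are illustrative placeholders, not the specifications of any particular device.

```python
# Depth from stereo disparity: Z = f * B / d
# f: focal length in pixels, B: baseline (lens separation) in meters,
# d: disparity (pixel shift of a feature between left and right images).
# All numbers here are illustrative, not from a real smart glass.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth of a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A hypothetical glasses-sized rig: 700 px focal length, 6 cm baseline.
z_near = depth_from_disparity(700, 0.06, 42.0)  # large shift -> close object
z_far = depth_from_disparity(700, 0.06, 7.0)    # small shift -> distant object
print(z_near, z_far)  # 1.0 6.0 (meters)
```

The narrow baseline a glasses frame allows is why disparity, and hence depth resolution, falls off quickly with distance.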
Smart glass uses internal and external sensors to generate information
Smart glasses rely on sensor fusion, combining components like GPS, an accelerometer and a gyroscope. Paired with a robust vision processor, they can run object detection and matching, and accurately discern the finely detailed gestures used to control the device’s various functions.
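A common way to fuse gyroscope and accelerometer readings into a stable orientation estimate is a complementary filter. The single-axis sketch below is a generic illustration of the idea, not the fusion algorithm of any particular smart glass; the sample values are made up.

```python
# Single-axis complementary filter: the gyroscope responds quickly but
# drifts over time; the accelerometer gives an absolute (but noisy) tilt
# reference. Blending the two yields a stable angle estimate.

def complementary_filter(angle_deg, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer's tilt estimate."""
    gyro_angle = angle_deg + gyro_rate_dps * dt                 # fast, drifting path
    return alpha * gyro_angle + (1 - alpha) * accel_angle_deg   # slow correction

angle = 0.0
# Simulated 100 ms of head motion: gyro reports 10 deg/s while the
# accelerometer's tilt estimate lags behind at 1 degree.
for _ in range(10):
    angle = complementary_filter(angle, gyro_rate_dps=10.0,
                                 accel_angle_deg=1.0, dt=0.01)
print(angle)  # gyro-driven estimate, gently pulled toward the accel reference
```

The same blend, run per axis on real IMU samples, is what keeps a head-tracked display from drifting.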
Moreover, depth sensors can facilitate selective refocus on a portion of a scene after image capture. Then there are 3D imaging technologies that can generate a depth map on the wearable device itself and use the resulting point cloud for image classification and estimation in cutting-edge applications like augmented reality.
Glass’ Design Conundrum
The common perception about connected wearable design is that a device like a smart glass can simply tether processing-heavy tasks such as object recognition and gesture interfaces to a smartphone, or that these vision processing functions can be conveniently moved to the cloud. That popular design premise deserves a serious review: first and foremost, a connected wearable device must hold some degree of intelligence to avoid becoming a dumb terminal.
Furthermore, smart wearables must carry some processing capability to reduce the amount of data transferred to a smartphone over a Bluetooth or Wi-Fi link. Likewise, sending raw video to the cloud over a cellular broadband connection increases both cost and power consumption. Large data transfers drain the wearable’s battery, which is much smaller than a smartphone’s.
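The bandwidth argument is easy to quantify. With illustrative numbers (a 720p sensor at 30 frames per second, 16 bits per pixel, versus a couple hundred bytes of detection metadata per frame), the gap between shipping raw pixels and shipping on-device results is several orders of magnitude:

```python
# Raw video versus on-device vision results.
# All figures are illustrative, not from any specific smart glass.
width, height = 1280, 720        # 720p sensor
bytes_per_pixel = 2              # e.g. 16-bit YUV 4:2:2
fps = 30

raw_bytes_per_sec = width * height * bytes_per_pixel * fps
metadata_bytes_per_sec = 200 * fps   # ~200 bytes of detected-object data per frame

print(raw_bytes_per_sec / 1e6)       # 55.296 -> ~55 MB/s of raw pixels
print(metadata_bytes_per_sec / 1e3)  # 6.0    -> 6 kB/s after on-device vision
```

Even aggressive video compression cannot close that gap, which is why on-device analysis pays for itself in battery life.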
Smart glasses need sleek batteries and power-efficient chips to last a full day of use. While new battery technologies are still far from commercial realization, semiconductor IP companies like CEVA now offer a path to power efficiency through specialized vision processing solutions that free up the CPU and GPU for their original design tasks.
CPUs and GPUs initially shouldered image-processing tasks, but dual-camera designs and advanced sensor capabilities in smart glasses now increasingly demand dedicated vision processing solutions. Vision processing, the workhorse of smart glass operations, uses powerful algorithms for sophisticated image and scene analysis, which in turn require a significant amount of computation.
Next-generation vision applications demand a specialized processor
Take object detection and matching, for instance, which have typically used SURF and SIFT feature algorithms; these tasks are now moving to deep learning techniques such as convolutional neural networks (CNNs) to meet the needs of 3D vision, computational photography and visual perception. CEVA’s XM4 imaging and vision processor IP is designed to offload the CPU and GPU from compute-intensive algorithms for image enhancement, computational photography and computer vision.
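The compute demand of a CNN layer boils down to dense 2-D convolution, the primitive a vision DSP is built to accelerate. The naive NumPy sketch below shows a single pass with a 3x3 edge-detecting kernel; shapes and values are illustrative only.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as CNN layers use it).

    Each output pixel costs k*k multiply-accumulates, so one CNN layer over
    a megapixel frame already runs to billions of MACs: the workload that
    pushes image analysis off the CPU and onto a vision processor.
    """
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge (Sobel-x) kernel over a tiny frame with one vertical edge.
frame = np.zeros((5, 5)); frame[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edges = conv2d_valid(frame, sobel_x)  # strong response only at the edge
```

Dedicated vision processors win by doing exactly these multiply-accumulates in wide parallel lanes instead of one scalar loop iteration at a time.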
The instruction set of the CEVA-XM4 vision processor is defined and optimized for computer vision workloads. It includes a number of features that reduce bandwidth, such as random-access parallel loads, leading to a smaller DSP with a far better cycle count. That, in turn, results in lower power consumption compared with imaging solutions based on GPUs or ARM+NEON configurations.
Wearable devices like smart glasses can bring a renewed push toward computer vision and computational photography by employing advanced camera subsystems that carry out image capture and vision processing in a power-efficient manner. Integrating intelligent vision processor IPs like the XM4 into smart glass systems-on-chip (SoCs) offers exactly that: robust processing performance at affordable power consumption.
Majeed Ahmad is the author of the books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.