Makers get access to Intel RealSense
by Don Dingee on 01-08-2015 at 11:00 pm

One of the great devices in maker lore is the Polaroid 6500 Series Sonar Ranging Module. It was originally part of the autofocus system for their SX-70 cameras circa 1978, long before through-the-lens optical autofocus sensors were perfected. Back then, people couldn’t focus. Dr. Land thought he was teaching people to compose images, not just point and shoot. Earlier models of the SX-70 with a focusing view screen enhanced by a rangefinder prism were producing too many fuzzy pictures. The ranging module bounced ultrasound off the primary subject and focused accordingly, even in low visible light.

Polaroid wanted to reduce costs and stimulate applications for the technology. In kit form with an ultrasonic transducer and a small logic board, the 6500 Series measured distances from 6 inches to 35 feet. This gave makers working on toys, robots, and other ideas an easy way to remotely measure distance from the perspective of the host.
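
A worked example helps here, since sonar ranging is just arithmetic on the echo’s round-trip time. A minimal Python sketch (the timing value below is illustrative, not a 6500 Series spec):

SPEED_OF_SOUND_M_S = 343.0  # in dry air at about 20 C

def sonar_distance_m(echo_round_trip_s):
    """Distance to the subject from an ultrasonic ping's round-trip time;
    the pulse travels out and back, so halve the path."""
    return SPEED_OF_SOUND_M_S * echo_round_trip_s / 2.0

# A 35-foot (~10.7 m) maximum range implies echoes arriving ~62 ms after the ping.
print(sonar_distance_m(0.062))  # ~10.6 m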

Here it is 2015, and guess what? People still can’t focus, even though there are more pictures being shot on smartphones and tablets than ever. There is nothing worse than taking a picture of a critical, once-in-a-lifetime moment and finding out the shot is out of focus. More accurately, the subject of interest is out of focus in the scene – the camera usually decides to focus on something, sometimes not what you wanted. Photoshop can work some miracles with unsharp masks and Smart Sharpen, but too little detail or too much shake and the image is toast. Image stabilization and other computational photography techniques help a lot.
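
As a rough illustration of why sharpening can only do so much, here is a minimal unsharp mask in Python (assuming NumPy, SciPy, and a 2D grayscale image): it amplifies edges already present in the capture, so detail that was never recorded stays lost.

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.0):
    """Classic unsharp mask: add back the difference between the image
    and a blurred copy, exaggerating whatever edges are already there."""
    blurred = gaussian_filter(image.astype(float), sigma=radius)
    return np.clip(image + amount * (image - blurred), 0, 255)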

Intel showed off quite a few buzzworthy demos at CES 2015. One was bringing RealSense back for a longer look, with more details after a slight tease at last year’s event and several Jim Parsons commercials over the recent holidays. There are two keys to RealSense technology: the 3D imaging subsystem, and the algorithms behind it.

There are erroneous reports that it is a 3D camera (and Intel even calls it that for simplicity, hoping nobody would notice). Smartphone enthusiasts may recall that craptastic fad from just a couple of years ago: stereoscopic optical images from dual lenses. Amazon also got the 3D label with the Dynamic Perspective sensor system on the Amazon Fire Phone, which uses four front-facing cameras plus infrared LEDs to sense how a user is holding and looking at the device and adjust the display.

Intel RealSense goes much further than either. It uses three physical pieces to capture images: a normal optical imager, an infrared laser projector, and an infrared imager. RealSense actually scans the entire scene up to 30 meters away in infrared, measuring distances from the optical sensor to each pixel, and with some computation the real-world dimensions between pixels in the scene. A closer comparison is Microsoft Kinect, which uses the same basic tactic of infrared scanning and 3D reconstruction, but only up to 4 meters.
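
To make “the real-world dimensions between pixels” concrete, here is a Python sketch of the standard pinhole-camera deprojection. The intrinsics below (FX, FY, CX, CY) are made-up placeholders, not RealSense calibration values.

import numpy as np

# Hypothetical pinhole-camera intrinsics, in pixels: focal lengths and
# principal point. Real values come from the camera's calibration.
FX, FY = 600.0, 600.0
CX, CY = 320.0, 240.0

def deproject(u, v, depth_m):
    """Turn pixel (u, v) plus its sensed depth into a 3D point in meters."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def real_world_distance(p1, d1, p2, d2):
    """Euclidean distance between two pixels, given each one's depth."""
    return np.linalg.norm(deproject(*p1, d1) - deproject(*p2, d2))

# Two pixels 100 columns apart, both sensed at 2 m: about 0.33 m apart.
print(real_world_distance((270, 240), 2.0, (370, 240), 2.0))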

Distance does wonderful things. It allows computational refocusing of an image, since the depth is known across the scene. It also allows gesture recognition, even emotional analytics, in a far more advanced way than a 2D digitized set of pixels can convey. And it powers drones to recognize and avoid objects – not just a blob of pixels in a scene, but a known object with position, vector, and velocity.
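
Computational refocusing deserves a sketch. This toy version, assuming NumPy, SciPy, and aligned grayscale image and depth arrays, blurs each pixel in proportion to how far its depth sits from the chosen focal plane; it shows the general idea only, not Intel’s algorithm.

import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_refocus(image, depth, focus_depth_m, strength=4.0):
    """Toy depth-guided refocus: pixels near focus_depth_m stay sharp,
    pixels far from it get progressively stronger Gaussian blur."""
    out = np.zeros_like(image, dtype=float)
    # Quantize per-pixel defocus into a handful of blur levels.
    levels = np.minimum((np.abs(depth - focus_depth_m) * strength).astype(int), 8)
    for level in range(levels.max() + 1):
        blurred = gaussian_filter(image.astype(float), sigma=level)
        out[levels == level] = blurred[levels == level]
    return out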

RealSense also allows real-time extraction of objects from a scene. For instance, I could replace the background for a video chat shot in my office with something more interesting, like seats on the 3B line at Angel Stadium of Anaheim.
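
A minimal sketch of that extraction, assuming aligned color and depth frames as NumPy arrays and a simple distance cutoff (real segmentation would be smarter about edges):

import numpy as np

def replace_background(color, depth, new_background, cutoff_m=1.5):
    """Keep pixels sensed closer than cutoff_m (me, in the chat) and take
    everything farther away from a replacement background image.
    color and new_background are HxWx3; depth is HxW in meters."""
    foreground = depth < cutoff_m  # simple depth threshold
    return np.where(foreground[..., None], color, new_background)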

Just as Amazon and Microsoft have done, Intel has an SDK for developing RealSense applications. It is broken into three parts: a front camera suite for gestures and interaction, a computational photography suite for photo processing, and a coming-soon rear camera suite for augmented reality and other ideas. The SDK supports Windows 8.1, and Intel says a 4th Generation Intel Core processor is needed for enough horsepower to run it effectively. Intel is also offering a standalone RealSense camera kit, manufactured by Creative.
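
To show the general shape of an application built on such a suite, here is a Python skeleton with a stubbed-in camera. Every name in it is invented for illustration; none of it is the actual RealSense SDK API.

import numpy as np

class StubCamera:
    """Stand-in for whatever camera interface the SDK exposes; it just
    fabricates synchronized color and depth frames."""
    def read(self):
        color = np.zeros((480, 640, 3), dtype=np.uint8)
        depth = np.full((480, 640), 2.0)  # pretend everything is 2 m away
        return color, depth

camera = StubCamera()
for _ in range(10):  # stand-in for an app's main loop
    color, depth = camera.read()
    nearest_m = depth.min()  # e.g., track the closest surface for gestures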

We’ve come a long way since the chirpy Polaroid kit. We’ll see if RealSense catches on, gaining support in smartphones and tablets. In the meantime – makers, away.
