Glasses and Open Architecture for Computer Vision
by Bernard Murphy on 09-18-2019 at 6:00 am

You know that AI can now look at an image and detect significant objects like a pedestrian or a nearby car. But had you thought about a need for corrective lenses or other vision aids? Does AI vision decay over time, like ours, so that it needs increasing help to read prescription labels and identify road signs at a distance?

Fisheye view

In fact no. But AI-assisted vision, generally called Computer Vision (CV), trains on undistorted, stable images in decent lighting. Let’s pick those assumptions apart, one at a time. To get a nice flat-field image in front of (or behind) your car you could use multiple cameras with relatively narrow-angle lenses, studded along the fender. That would be very expensive and power-hungry. Or you could use a single camera with a very wide-angle lens (see above). Much better cost-wise but there’s a bit of a problem with distortion.

This is correctable through a process known as dewarping, a geometric transformation of the image and a process which is already well understood. Image stabilization is another familiar technique, correcting for the jitters in your hand-held camera or a GoPro on your helmet as you’re biking down a rocky slope.
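
To make the dewarping step concrete, here is a minimal sketch using OpenCV’s fisheye module. This is a generic illustration, not Immervision’s or CEVA’s implementation, and the camera matrix and distortion coefficients below are placeholder values that would normally come from a calibration run.

# Minimal fisheye dewarping sketch using OpenCV (generic, not CEVA's pipeline).
# K and D are placeholder calibration values; real ones come from
# cv2.fisheye.calibrate() on a checkerboard sequence.
import cv2
import numpy as np

img = cv2.imread("fisheye_frame.png")            # raw wide-angle frame
h, w = img.shape[:2]

K = np.array([[320.0, 0.0, w / 2],               # assumed intrinsic matrix
              [0.0, 320.0, h / 2],
              [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.05, 0.01, 0.0])            # assumed fisheye distortion coefficients

# Build the per-pixel remapping once, then reuse it for every frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
flat = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("dewarped_frame.png", flat)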

There are fixes for these problems today, but they generally add extra devices or multi-purpose IP, along with cost and power consumption. That can be a real problem in consumer devices because we don’t like more expensive products and we don’t want our battery to run down faster. It’s also a problem for the sensors in your car. More AI processing is moving into the sensors to reduce bandwidth load on the car network, allowing sensors to send objects rather than raw images to the central processor.
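
A rough back-of-envelope calculation shows why shipping detected objects instead of raw frames matters for the car network. All of the numbers below are illustrative assumptions, not CEVA figures.

# Back-of-envelope comparison: raw video stream vs. detected-object metadata.
# Every number here is an illustrative assumption.
frame_w, frame_h, bytes_per_pixel, fps = 1920, 1080, 2, 30   # e.g. YUV422 video
raw_bytes_per_s = frame_w * frame_h * bytes_per_pixel * fps

objects_per_frame = 20
bytes_per_object = 16        # class id, confidence, bounding box
object_bytes_per_s = objects_per_frame * bytes_per_object * fps

print(f"raw video:   {raw_bytes_per_s / 1e6:.1f} MB/s")
print(f"object data: {object_bytes_per_s / 1e3:.1f} KB/s")
print(f"reduction:   ~{raw_bytes_per_s / object_bytes_per_s:,.0f}x")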

CEVA and Immervision, a developer/licensor of wide-angle lenses and image processing technologies, announced a strategic partnership just last month. In exchange for a significant investment in Immervision, CEVA gained exclusive licensing rights to their portfolio of patented wide-angle image processing technology and software. CEVA also licensed technology from Immervision as a part of this deal for better image quality and video stabilization.

(Incidentally, as a part of the same deal, CEVA also licensed Data-in-Picture technology, which integrates fused sensory data, such as that offered by Hillcrest Labs (itself recently acquired by CEVA), within each video frame. CEVA seems to be putting together a very interesting business proposition in CV – watch this space.)

If you need a low cost, low power solution in a consumer device or a car or in many other applications, it makes sense to integrate these capabilities directly into your CV solution. That’s what CEVA have done with their just-announced NeuPro-S IP which bundles in the vision processing software. So you can have a single fisheye backup camera at low cost, low power and probably higher reliability than multi-chip solutions.

There are a lot of other interesting features in the NeuPro-S, including integrated SLAM and safety compliance, for which I’ll refer you to the website link below. But there is one feature I thought worthy of special mention in this short blog. Multiple AI accelerators from multiple sources are starting to be integrated together in single-chip implementations. This raises an interesting question – how do you download training to all these accelerators? A standalone solution per accelerator doesn’t look like a great answer.

CEVA have invested heavily in their CDNN deep-learning compiler, which maps and optimizes networks from the most common training frameworks into inference networks for edge devices. The optimizations include advanced quantization algorithms (mapping from floating-point to fixed-point), data flow management and optimized CNN and RNN libraries to run on the edge.
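
The floating-point to fixed-point mapping can be illustrated with a simple symmetric int8 scheme. This is a generic textbook approach, not CDNN’s actual quantization algorithm.

# Generic symmetric int8 quantization of a weight tensor -- a simplified
# illustration of float-to-fixed mapping, not CDNN's actual algorithm.
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = float(np.abs(weights).max()) / 127.0      # 127 = max int8 magnitude
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)        # dummy layer weights
q, scale = quantize_int8(w)
print(f"scale={scale:.5f}, max error={np.abs(w - dequantize(q, scale)).max():.5f}")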

Now CEVA have opened up the CDNN interface, through a feature they call CDNN-Invite, to support not only the NeuPro, CEVA-X and CEVA-XM platforms but also proprietary platforms, making heterogeneous AI a reality on edge devices while still keeping the simplicity of a unified compiler interface. I like that – open interfaces are almost always a plus.
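
The idea of a single compiler front-end with pluggable accelerator back-ends can be sketched abstractly. The class and method names below are hypothetical illustrations of the concept, not the actual CDNN-Invite API.

# Hypothetical sketch of a unified compiler front-end with pluggable
# accelerator back-ends. Names are illustrative only, NOT the CDNN-Invite API.
from abc import ABC, abstractmethod

class AcceleratorBackend(ABC):
    """Contract a third-party AI engine would implement to plug in."""
    @abstractmethod
    def supports(self, op: str) -> bool: ...
    @abstractmethod
    def compile_op(self, op: str, params: dict) -> bytes: ...

class CustomNPUBackend(AcceleratorBackend):
    SUPPORTED = {"conv2d", "relu", "maxpool"}
    def supports(self, op): return op in self.SUPPORTED
    def compile_op(self, op, params): return f"{op}:{params}".encode()

class UnifiedCompiler:
    """Single front-end: assign each graph op to whichever back-end claims it."""
    def __init__(self, backends): self.backends = backends
    def compile(self, graph):
        plan = []
        for op, params in graph:
            target = next((b for b in self.backends if b.supports(op)), None)
            plan.append((op, type(target).__name__ if target else "CPU fallback"))
        return plan

graph = [("conv2d", {"k": 3}), ("relu", {}), ("softmax", {})]
print(UnifiedCompiler([CustomNPUBackend()]).compile(graph))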

You can learn more about the NeuPro-S HERE.