Embedded Vision Summit: the How-to and the How-to
by John Swan on 06-19-2014 at 12:08 am

When I realized I could attend the Embedded Vision Summit (EVS) if I changed my return flight to a day earlier, I didn’t hesitate. Thankfully I was able to change my flight without any nuisance fee from the airline, and attended EVS.
There were two “How-tos” at this Summit:

  • The algorithmic How-to, which includes

    • Object detection
    • Object recognition
  • The design How-to, which includes

    • IP aspect
    • EDA tooling aspect

The morning keynote presentation, “Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems” by Yann LeCun of Facebook and New York University, was a good example of the algorithmic How-to. Object detection is difficult enough, with objects moving in several ways: translationally (across the field of view), in depth (closer or farther), rotationally around an axis, and by changing shape – like a person reaching out their arms, or even talking. Even more difficult is object recognition.

Not a light keynote, LeCun dug into the algorithm and experimental results. The algorithm goes through a brief learning process, after which it can give good probability estimates that an object matches one it has already learned. LeCun demonstrated the algorithm using a webcam attached to his PC, aiming it at different scenes from the podium: his face, his shoes, the right side of the audience, the left side, etc. After aiming the webcam and pushing the Learn button, he could then point the camera at the various scenes and show a probability histogram of which learned ‘object’ each scene matched. LeCun presented for an hour and kept the audience captivated. Just when you thought he might be wrapping up, he pushed on to something new.
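
To make the flow of that demo concrete, here is a minimal learn-then-classify sketch. It is emphatically not LeCun’s convolutional network: it substitutes simple color histograms for learned convolutional features, and the key bindings, feature choice, and softmax scoring below are my own assumptions, purely to illustrate the Learn-button / probability-histogram loop he showed.

```python
# A simplified stand-in for the learn-then-classify demo described above.
# NOT LeCun's convolutional network: plain HSV color histograms serve as
# scene features so the whole flow fits in a short, runnable script.
import cv2
import numpy as np

def frame_feature(frame):
    """Compute a normalized HSV color histogram as a crude scene descriptor."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist)
    return hist.flatten()

def classify(feature, learned):
    """Return a softmax 'probability histogram' over the learned scenes."""
    names = list(learned)
    dists = np.array([np.linalg.norm(feature - learned[n]) for n in names])
    scores = np.exp(-dists)            # smaller distance -> larger score
    probs = scores / scores.sum()
    return dict(zip(names, probs))

cap = cv2.VideoCapture(0)              # default webcam
learned = {}                           # scene name -> stored feature
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("scene", frame)
    key = cv2.waitKey(30) & 0xFF
    if key == ord("l"):                # press 'l' to learn the current scene
        name = f"scene_{len(learned)}"
        learned[name] = frame_feature(frame)
        print("learned", name)
    elif key == ord("c") and learned:  # press 'c' to classify the current scene
        print(classify(frame_feature(frame), learned))
    elif key == ord("q"):              # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```

A real system would replace frame_feature with features from a trained convolutional network, which is what made the demo robust to the pose and shape changes mentioned above.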

Jeff Bier has the design How-to expertise, which I have been aware of since about 1997 when I was at Motorola Corporate Labs. Jeff has always been up on DSP design tooling – it fits with the traditional theme of BDTI, which he founded to do DSP processor benchmarking. Multimedia such as embedded vision (if you can call it multimedia) is the next higher order of signal processing.

Jeff gave two presentations, one entitled “What’s New in Tools for Vision Application Design and Development?” To extract knowledge from embedded vision we rely on the hardware (processors, sensors, etc.), the software (algorithms, libraries, APIs), and the tools to get both of them together and working. Jeff highlighted three main software development environments: OpenCV, OpenCL, and OpenVX, an emerging Khronos standard API providing a vision hardware acceleration (abstraction) layer. Khronos has information on OpenCL and OpenVX, and I will leave it to the reader to do some further research on those. Jeff also told us about development kits that support embedded vision.
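
For a taste of what development in these environments looks like, here is a minimal sketch using OpenCV’s built-in HOG pedestrian detector. This touches only OpenCV, one of the three environments Jeff listed, and the image path and detector parameters are illustrative assumptions rather than anything from his presentation.

```python
# Minimal OpenCV sketch: detect pedestrians in a still image with the
# built-in HOG descriptor and its default people detector.
import cv2

img = cv2.imread("street.jpg")         # hypothetical test image
if img is None:
    raise SystemExit("could not read street.jpg")

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Scan the image at several scales; each hit comes back as a bounding box.
rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("street_detections.jpg", img)
```

OpenVX and OpenCL sit at a lower level, standardizing how pipelines like this get accelerated on embedded hardware.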

After attending this Summit, I intend to attend future Summits!

You can access the Summit presentations here if you are registered on the Embedded Vision Alliance website.

(Submitted from DAC, where there’s a lot more on the How-to)
