Impressions from Embedded Vision Summit 2019

Al Gharakhanian

I had an opportunity to visit the showcase of the Embedded Vision Summit, held this week at the Santa Clara Convention Center. The event was certainly bigger and more colorful than previous ones. Aside from the usual vendors, I came across many new names offering products, services, and tools covering the gamut of computer vision. Clearly, AI-based vision processing and automotive overshadowed the traditional computer vision technologies and use cases. As expected, a number of new companies announced vision-optimized AI accelerator chips and intellectual property. (I must say it is becoming increasingly difficult to keep track of all the players in this domain, and I might be forced to add a new tab to my Excel sheet. Reducing the font size does not cut it anymore.)
I was struck by one notable trend: vertical integration. Several vendors had offerings covering the entire spectrum for automotive applications: an autonomous driving software stack, ADAS (Advanced Driver Assistance Systems) hardware, AI accelerator ASICs, and in some cases even their own IP.
While there were many promising companies with breakthrough technologies, I was particularly impressed by the technology and achievements of a few that are most relevant to this forum. Below is a brief summary:

Mythic
Mythic (www.mythic-ai.com) is a fabless semiconductor company, based in Austin and Redwood City, building AI edge accelerator chips. Their initial target applications are smart speakers, drones, battery-powered video monitors, and specialized smartphones for certain vertical markets. Their implementation of deep neural networks is based on flash-based analog technology. This approach reduces the need to retrieve network weights from memory, leading to significant power savings (10x).
I felt they have a very clear and focused mission and were able to effectively convey their differentiation (10x power savings). They showed various reference boards in different form factors. One in particular caught my eye: a PCIe card hosting eight of their chips. I am guessing they are targeting power-restricted edge servers. Keep in mind that server companies are building edge servers only slightly bigger and thicker than an iPad, with hardened enclosures, where a fan is not an option.
While the power advantage of their analog approach is indisputable, it becomes less pronounced when competitors opt for finer process nodes. That said, I have never come across a startup with innovative technology that maintained its lead without reinvention, and my guess is that Mythic will continue to innovate.

Xnor.ai
Xnor.ai (www.xnor.ai) is a Seattle-based company that has developed highly optimized, low-footprint deep learning CNN models for computer vision tasks. These models can run on low-power SoCs with embedded Arm CPUs or dedicated computer vision hardware. Their initial target markets are home security, home automation, and smart appliances. They seem to have a close partnership with Ambarella. The beauty of their story is that they can enable "legacy hardware" to tackle complex vision processing tasks such as object or face identification, as well as more complex tasks dealing with live video feeds. As an example, large video surveillance camera companies can add AI-based vision processing features without major hardware rework.
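The post doesn't spell out the underlying technique, but Xnor.ai's name is a nod to binarized (XNOR-Net-style) networks, where weights and activations are constrained to {-1, +1} so that a multiply-accumulate collapses into a bitwise XNOR plus a population count. A minimal sketch of that core trick (my illustration, not Xnor.ai's actual code):

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bit masks.

    Bit i encodes element i (LSB first): bit = 1 means +1, bit = 0 means -1.
    """
    # XNOR: a 1 bit wherever the two signs agree (both +1 or both -1).
    matches = ~(a_bits ^ b_bits) & ((1 << n) - 1)
    # Each agreement contributes +1, each disagreement -1:
    # dot = matches - (n - matches) = 2 * matches - n
    return 2 * bin(matches).count("1") - n

# a = [+1, +1, -1, +1] packed LSB-first -> 0b1011
# b = [+1, -1, +1, +1] packed LSB-first -> 0b1101
print(binary_dot(0b1011, 0b1101, 4))  # -> 0 (two agreements, two disagreements)
```

Because the whole inner product is a couple of bitwise operations on packed words, it maps onto plain integer ALUs, which is presumably how "legacy hardware" gets away with running these models.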
I would guess their technology is a godsend for traditional MCU companies (the likes of Microchip, ON Semi, Silicon Labs, and ST) that do not yet have MCUs with dedicated AI acceleration hardware.
Like any other startup, they have their challenges: computer vision models are getting bigger and more complex, and there are limits to what optimization and compression can achieve.


Hailo
Hailo (www.hailo.ai) is an Israeli company that has accomplished a tremendous amount with relatively limited funding (roughly $25M, if I recall correctly). I saw live demos of their Hailo-8 chip and was impressed by its performance metrics: a tiny chip with no heat sink, delivering a peak of 26 TOPS at 2.8 TOPS/W, impressive for an AI edge inference accelerator.
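As a back-of-the-envelope check (my arithmetic, not a figure from Hailo), the quoted peak throughput and efficiency imply a peak power draw of roughly 9 W:

```python
peak_tops = 26.0   # quoted peak throughput, TOPS
efficiency = 2.8   # quoted efficiency, TOPS/W

implied_peak_power_w = peak_tops / efficiency
print(f"{implied_peak_power_w:.1f} W")  # roughly 9.3 W at full tilt
```

Staying in the single-digit-watt range is consistent with the chip running with no heat sink.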
Although they are targeting most edge applications (autonomous vehicles, smart cameras, smartphones, drones, AR/VR), I got the sense that they have a special affinity for the automotive segment. This makes sense, since their performance is among the highest I have seen in edge inference chips, and no other edge application values performance more than automotive. Alternatively, they may have been inspired by Intel's $15B acquisition of Mobileye. These are merely my guesses, and both reasons are perfectly valid in my book.
Aside from the technical mumbo jumbo, their unwavering focus, infectious optimism, and determination impressed me the most.

Al Gharakhanian
Check out prior posts here
 