Presto Engineering recently held a webinar discussing vision chip technology – what a vision chip is, what its applications are, and how to optimize its use. The presenter was Samer Ismail, a design engineer at Presto Engineering with deep domain expertise in vision chip technology. Samer takes you on a very informative journey through image processing and where vision chips fit. At first glance, a “low-resolution” vision chip sounds like a design compromise. In fact, it is a way to optimize machine vision applications.
I will take you through some of the insights Samer offered during the webinar. I highly recommend you view Delta’s entire low-resolution vision chip webinar here; the full event is 40 minutes with an excellent Q&A session – you will learn a lot.
The topics covered in this webinar are as follows. I’ll cover a bit of detail on each one to whet your appetite.
- What is a vision chip?
- Why a low-resolution vision chip?
- Vision chips in standard CMOS process
- Working principle
- Vision algorithms
First, what is a vision chip? It’s NOT just a CMOS image sensor. Rather, a vision chip can both capture an image (typically with a CMOS image sensor) and analyze that image with a combination of analog and digital circuits to extract information from it. This reminded me of edge vs. cloud processing: moving the processing closer to the data source has some significant advantages, and that’s exactly what is going on in a vision chip. The information on the ubiquity of vision chips surprised me. You’ll need to watch the webinar to judge for yourself.
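To make the “extract information, don’t ship pixels” idea concrete, here is a rough sketch in plain Python – my own illustration, not code from the webinar or from any Presto device. It reduces two tiny grayscale frames to a single motion-detected bit via frame differencing; the threshold values are arbitrary placeholders. A vision chip would do this kind of reduction on-die, in analog and digital circuitry, and output only the result.

```python
def motion_flag(prev, curr, pixel_thresh=20, count_thresh=4):
    """Return True if enough pixels changed between two same-sized frames.

    prev, curr: 2-D lists of 0-255 grayscale values.
    pixel_thresh: per-pixel change needed to count as "changed" (placeholder).
    count_thresh: number of changed pixels needed to flag motion (placeholder).
    """
    changed = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            if abs(p - c) > pixel_thresh:
                changed += 1
    return changed >= count_thresh

# Two 4x4 frames: a bright blob appears between frame_a and frame_b.
frame_a = [[0] * 4 for _ in range(4)]
frame_b = [[0] * 4 for _ in range(4)]
for r in range(2):
    for c in range(2):
        frame_b[r][c] = 200

print(motion_flag(frame_a, frame_b))  # True: 4 pixels changed by more than 20
```

The point is the output size: one bit leaves the sensor instead of thousands of pixel values, which is exactly the edge-processing advantage described above.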
Why use a low-resolution vision chip? Simply put, latency, power, cost, and area all benefit from using a low-resolution device. Think of analyzing a 64 × 64-pixel image vs. a one-megapixel image. The benefits can be substantial if your application fits a low-resolution profile. The types of applications that benefit are discussed during the webinar.
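To put that resolution gap in numbers – my own back-of-the-envelope arithmetic, not a figure quoted in the webinar – a 64 × 64 frame carries 4,096 pixels, while a one-megapixel frame (taken here as 1024 × 1024) carries over a million:

```python
low_res = 64 * 64          # 4,096 pixels per frame
megapixel = 1024 * 1024    # 1,048,576 pixels ("one megapixel" as a power of two)
ratio = megapixel / low_res
print(ratio)  # 256.0
```

Every frame is roughly 250× less data to read out, store, and process, which is where the latency, power, cost, and area savings come from. (If you take a megapixel as an even 1,000,000 pixels, the ratio is about 244× – same story.)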
Typical high-resolution image sensors use a specialized process, one that produces bigger and more expensive sensor dies. These processes typically don’t support non-volatile memory. So, if you plan to capture an image and use an embedded processor on the same die to analyze it, this is going to be difficult without embedded memory. If you are in the low-resolution domain, these problems go away since you can use standard, lower cost manufacturing processes. There are several other benefits of a lower cost process as well, described in the webinar.
An architectural overview of Presto’s new Heimdal 2 vision chip follows. The elements of the architecture, its flexibility, capabilities, and features, and how to apply the device to various vision tasks are all discussed. Samer goes into substantial detail here. The Heimdal 2 is available on an evaluation board, and an example application using the device is presented. Samer also reviews a methodology for developing vision applications with Heimdal 2, using well-known algorithms as examples. This part of the webinar is a very good tutorial on image processing algorithms and how to implement them in hardware, with a special focus on low-resolution applications and their benefits.
Samer ends his presentation with two use case examples of a low-resolution vision chip – finding a vacant parking space and monitoring shopping behavior in a store. The webinar concludes with a short, but useful Q&A session. If machine vision is of interest, I highly recommend you watch this webinar.