SiMa.ai just announced that they achieved first silicon success on their new MLSoC for AI applications at the edge, using Synopsys’ design, verification, IP and design services solutions. Notably, this design includes the Synopsys ARC® EV74 processor (among other IP) for vision processing. SiMa.ai claims its platform, now released to customers, is significantly more power efficient than competing options and provides hands-free translation from any trained network to the device. (Confirming an earlier post that software rules ML at the edge.) The company has impressive funding and experienced leadership, so this is definitely a company to watch.
Strong Vision ML Starts with Strong Imaging
In modern intelligent designs, AI gets the press but would be worthless if presented with low-quality images. A strong image processing stage ensures that, between the camera and the ML stage, images are optimized to the greatest extent possible, particularly to meet or exceed how the human eye, still the golden reference, perceives an image.
A dedicated ISP stage can get pretty sophisticated, up to and including its own elements of machine learning. Note: I don’t know how much, if any, of the Synopsys ML support is included in the SiMa solution, or the range of EV74 ML capabilities they use. You will have to ask SiMa those questions.
ISP functions include de-mosaicing, which compensates for the color filter array overlaid on the raw pixel-based image sensor by interpolating a smooth full-color image from that pixelated input. Fisheye lenses, common in surveillance cameras, require compensation for geometric distortion, another ISP function. Add to this list de-noising, color balance and a host of other options, all essential when matching to similarly compensated training images.
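For readers who want to experiment, here is a minimal sketch of a few such stages using the open-source OpenCV library in Python. This is purely illustrative; a production ISP on hardware like the EV74 would implement equivalent stages through its own software flow, and all file names and calibration values below are placeholders.

```python
import cv2
import numpy as np

# Illustrative ISP stages using OpenCV; file names and parameters are placeholders.

# 1. De-mosaic: interpolate a full-color image from a Bayer-pattern raw frame.
raw = cv2.imread("raw_bayer_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical raw capture
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)

# 2. Fisheye correction: undo geometric distortion using an example camera model.
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])  # example intrinsics
D = np.array([0.1, -0.05, 0.0, 0.0])                                       # example distortion
undistorted = cv2.fisheye.undistortImage(rgb, K, D, Knew=K)

# 3. De-noise: suppress sensor noise while preserving edges.
denoised = cv2.fastNlMeansDenoisingColored(undistorted, None, 5, 5, 7, 21)

# 4. Gray-world color balance: scale each channel so the scene averages to gray.
means = denoised.reshape(-1, 3).mean(axis=0)
balanced = np.clip(denoised * (means.mean() / means), 0, 255).astype(np.uint8)

cv2.imwrite("isp_output.png", balanced)
```

Real ISP hardware typically streams these operations over line buffers rather than whole frames, but the functional intent is the same.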
I personally find high dynamic range (HDR) to be one of the most interesting ISP adjustments, especially for AI apps. The opening images for this article illustrate an example HDR application. On the left is an image after other compensations, not including HDR. The right image is HDR compensated. Many ISP functions optimize globally; HDR is a local optimization, balancing between bright and darker areas in an image. Before compensation, features in low-light areas are almost invisible. After compensation, features are clear across the image despite a wide range of brightness. This is critically important for ML to detect, say, a pedestrian stepping off a sidewalk in a shaded area on a bright day.
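To make the local-versus-global distinction concrete, here is a small illustrative Python/OpenCV snippet that applies CLAHE (contrast limited adaptive histogram equalization) to the lightness channel. It is only a simple stand-in for the kind of local brightness balancing an HDR stage performs, not the algorithm used by Synopsys or SiMa.ai, and the input file name is a placeholder.

```python
import cv2

# Local tone adjustment sketch: per-tile equalization on the lightness channel
# lifts shadowed regions without blowing out highlights.
img = cv2.imread("bright_day_scene.png")                     # hypothetical input frame
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # operates on local tiles
l_eq = clahe.apply(l)

result = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("locally_balanced.png", result)
```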
While Synopsys doesn’t directly provide application software, they do offer a set of tools that designers use to create and optimize their own ISP software. The Synopsys MetaWare Development Toolkits support C/C++ and OpenCL C programming as well as vision kernels to ease application development. For those of you who don’t have applications expertise in these areas, there are also open-source solutions 😊
Intelligence in image signal processing
The EV74 processor optionally supports ML processing. I suspect this isn’t relevant to the SiMa application, but it is relevant to image processing, even before you get to object identification. Super-resolution methods aim to construct a higher-resolution image from a lower-resolution input using one of many possible neural net techniques. Consumer and medical applications often apply super-resolution for graphic enhancement, using learning to infer plausible interpolated pixels between existing pixels.
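As a rough illustration of the idea (and not the EV74 implementation or any vendor model), a toy ESPCN-style network in PyTorch shows how learned convolutions plus a pixel-shuffle layer can synthesize the extra pixels for a 2x upscale:

```python
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    """Toy ESPCN-style super-resolution net: convolutions learn features,
    PixelShuffle rearranges channels into a 2x-upscaled image."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),   # (3*s*s, H, W) -> (3, s*H, s*W)
        )

    def forward(self, x):
        return self.body(x)

low_res = torch.rand(1, 3, 120, 160)   # dummy low-resolution input
high_res = TinySuperRes()(low_res)     # output shape: (1, 3, 240, 320)
```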
The EV74 DNN option can handle more than just that application. It supports direct mapping from the Caffe and TensorFlow frameworks, and from the ONNX neural network interchange format. Edge AI in many applications demands a single-chip solution. The EV74 can support a standalone implementation (with appropriate memory and other functions in the SoC) or be integrated with value-added specialist functionality like that from SiMa.ai.
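To show what mapping through an interchange format can look like in practice, the hypothetical snippet below exports a small PyTorch model to ONNX. The model, file name and export settings are purely illustrative and not part of any flow described in the article; the resulting .onnx graph is simply what a downstream mapping tool would ingest.

```python
import torch
import torch.nn as nn

# Hypothetical example: export a small PyTorch model to the ONNX interchange format.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 3, kernel_size=3, padding=1),
).eval()

dummy_input = torch.rand(1, 3, 224, 224)   # example input shape
torch.onnx.export(model, dummy_input, "example_model.onnx",
                  input_names=["image"], output_names=["features"],
                  opset_version=13)
# example_model.onnx is the artifact a framework-agnostic toolchain would consume.
```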
What is coming next in solutions?
I talked more generally with Stelios Diamantidis (Distinguished Architect, Head of Strategy, Autonomous Design Solutions at Synopsys). He mentioned that edge applications are inherently heterogeneous, as data travels from optics, to sensors, to compute, to memory, to display, etc. The need to maintain low end-to-end latency across the system is one factor contributing to the chiplet movement. One example application is drones, which demand fast response times to avoid obstacles. He also sees big pickup in industrial applications, for example LIDAR sensing in production lines to control grippers. Both cases require strong vision and AI to support the performance requirements of increasingly complex neural network models in SoCs.
Stelios added that, across industrial and vehicle applications, such designs must be robust to a great deal of environmental variation. Design methods and standards prove this in part, complemented by an established track record in design and supported across a wide variety of applications, from semiconductor leaders to pioneers.
Very interesting stuff. You can read the SiMa.ai press release HERE.