Artificial intelligence (AI) is reserved for companies with hordes of data scientists, right? There are plenty of big problems where heavy-duty AI fits. There’s also a space of smaller, well-explored problems where lighter AI can deliver rapid results. Flex Logix is taking that idea a step further, packaging its InferX X1 edge inference accelerator chip in a turnkey vision solution. The best part: it’s pre-trained for specific use cases, so users don’t need any AI expertise to get running.
Lined up for common object detection use cases
We’re not talking about self-driving rocket science. Vision technology is now robust enough to detect sizable objects, like cars passing through a checkpoint or people in a room or walkway. Detection accuracy for low-velocity targets in controlled lighting conditions is very high. Scaling is also easy: one or several cameras can interface over Ethernet, USB, or fast Wi-Fi and feed a single system for processing.
Still, teams who don’t work with AI every day struggle with implementation. They must find hardware and software, figure out the right AI model for detecting objects, and find or create an AI training data set. Then, they need to put all that together and verify their application works. It can be a very long path to successfully train an application, even for those with AI experience.
What if the training part were already done in a turnkey vision solution? By hand-picking use cases and creating inference software, the EasyVision solution comes ready for a live image feed. Some applications Flex Logix is working on, with a goal of adding two new ones per month:
- Workplace safety – checking people entering a facility for visible gear such as hardhats can be automated.
- People counting – retailers, schools, and event centers can count how many people enter a building, occupy a specific room, or pass through an area.
- Health monitoring – face mask compliance checks are easy with vision detection.
- Vehicle access – how many cars, how many open spaces, and how long each car has been inside a parking facility are also easy detection tasks.
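Use cases like people counting and vehicle access reduce to the same primitive: tallying detections of a given class above a confidence threshold. The sketch below illustrates that logic in plain Python; the detection-record format and function names are assumptions for illustration, not the EasyVision interface.

```python
def count_objects(detections, label, min_confidence=0.5):
    """Count detections of a given class above a confidence threshold.

    Each detection is a hypothetical (label, confidence, box) tuple,
    standing in for whatever record a real inference pipeline emits.
    """
    return sum(
        1 for cls, conf, _box in detections
        if cls == label and conf >= min_confidence
    )

# One frame's worth of hard-coded example detections.
frame_detections = [
    ("person", 0.91, (40, 60, 120, 260)),
    ("person", 0.47, (300, 80, 90, 240)),   # below threshold, ignored
    ("car",    0.88, (500, 200, 220, 140)),
]

print(count_objects(frame_detections, "person"))  # → 1
```

Room occupancy or parking-space counts would layer simple bookkeeping (entries minus exits, spaces minus cars) on top of the same per-frame counts.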
YOLO-based recognition at up to 60fps in less than 10W
Running AI inference efficiently is also a big piece of the equation. Quite a few convolutional neural network (CNN) algorithms can do object detection. YOLO (You Only Look Once) is a one-stage detector algorithm that finds regions and classifies objects in a single pass. The result is excellent real-time object detection performance. YOLO continues evolving, with recent versions improving frame rate without compromising accuracy.
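A one-stage detector like YOLO emits many overlapping candidate boxes in its single pass, so a standard post-processing step, non-max suppression (NMS), keeps the highest-scoring box and discards overlapping duplicates. This is a minimal, generic sketch of that step, not Flex Logix's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression: keep the best box, drop heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(10, 10, 110, 110), (12, 12, 112, 112), (200, 200, 300, 300)]
scores = [0.9, 0.85, 0.8]
print(nms(boxes, scores))  # → [0, 2]; box 1 overlaps box 0 and is suppressed
```

Doing region proposal, classification, and this cleanup in one forward pass is what makes one-stage detectors fast enough for real-time video.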
YOLO also maps cleanly to the InferX X1 chip, designed for efficient low-power AI inference – not video gaming. Its tensor processor units, or TPUs, are tiled and reconfigurable dynamically for many CNN models. In an AI development workflow, a customer would use the InferX DK tool chain to compile their preferred trained model into the InferX X1. In the EasyVision solution, Flex Logix has already done that work for the YOLO algorithm and object training data sets.
The EasyVision solution runs object detection on HD images from multiple cameras in real time at up to 60fps, using less than 10W of power. The InferX X1 chip comes on either a PCIe or M.2 card, allowing installation in many hosts – including Dell and HPE platforms. Users get software to install pre-trained object detection models of choice. There’s also a software API for integrating detection results into a high-level application.
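The article doesn’t document the integration API itself, but the pattern is familiar: the application subscribes to detection events and layers its own logic on top. The class and method names below are purely hypothetical stand-ins, not the actual EasyVision API:

```python
# Hypothetical integration layer – these names are illustrative only
# and are NOT the actual EasyVision API.
class DetectionEvent:
    def __init__(self, label, confidence, camera_id):
        self.label = label
        self.confidence = confidence
        self.camera_id = camera_id

class SafetyApp:
    """Toy high-level application: flag workers entering without a hardhat."""
    def __init__(self, min_confidence=0.6):
        self.min_confidence = min_confidence
        self.alerts = []

    def on_detection(self, event):
        # Application logic layered on top of detection results.
        if event.label == "person_no_hardhat" and event.confidence >= self.min_confidence:
            self.alerts.append(f"camera {event.camera_id}: missing hardhat")

app = SafetyApp()
app.on_detection(DetectionEvent("person_no_hardhat", 0.82, camera_id=3))
app.on_detection(DetectionEvent("person_hardhat", 0.90, camera_id=3))
print(app.alerts)  # → ['camera 3: missing hardhat']
```

The point is the division of labor: the pre-trained model handles detection, and the customer's code only has to decide what a detection means for their workflow.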
As the portfolio of EasyVision-trained applications expands, more users will see the power of a turnkey vision solution. EasyVision gets a vision-enabled object detection application off the ground with no AI learning curve. Teams looking to launch a broader AI initiative may want to start with an EasyVision package to pilot a concept. Then, they can step up to creating models and configuring the InferX X1 chip, leveraging its low-cost, efficient AI inference.
For more info, please visit the Flex Logix EasyVision webpage.