Measured against the full span of recorded human history, the last 300 years (a blink on that time scale) have seen incredible, life-changing and world-changing advances. Water- and steam-driven machines first appeared in the 1700s, an era often called Industry 1.0. Powered assembly lines in the late 1800s ushered in Industry 2.0. Automation and computers started Industry 3.0 in the late 1960s. Artificial intelligence (AI) is now driving Industry 4.0. This timeline should be familiar to most; a lot has been written about it. What is interesting is what comes next, and that is the topic of a recent white paper from Achronix. I found the perspective offered in this piece to be fresh and enlightening. Without disclosing too much of its content, let’s examine how robotic precision and human creativity will write the next chapter.
How Does AI Fit?
AI is a rather broad term. It refers to a branch of computer science that aims to emulate human behavior algorithmically. Focusing a bit more, we see many references to machine learning (ML), a subset of AI that uses statistical models derived from data. Focusing further still, deep learning (DL) uses neural networks to perform inferencing. These systems can also be adaptive, i.e., they can learn.
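To make the DL layer of this hierarchy concrete, here is a minimal sketch of what "neural networks performing inferencing" means: a tiny two-layer network doing a forward pass. The weights and layer sizes are arbitrary illustrative values, not taken from the white paper.

```python
def relu(x):
    # rectified linear activation, applied element-wise
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    # one fully connected layer: out[j] = sum_i inputs[i] * weights[i][j] + biases[j]
    return [sum(xi * row[j] for xi, row in zip(inputs, weights)) + b
            for j, b in enumerate(biases)]

# toy network: 3 inputs -> 2 hidden units (ReLU) -> 1 output
w1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]  # 3x2 weight matrix
b1 = [0.0, 0.1]
w2 = [[1.0], [-0.5]]                          # 2x1 weight matrix
b2 = [0.2]

def infer(x):
    # inference is just repeated matrix math plus nonlinearities --
    # exactly the kind of workload accelerators are built for
    hidden = relu(dense(x, w1, b1))
    return dense(hidden, w2, b2)
```

Inference is dominated by multiply-accumulate operations like the ones in `dense`, which is why it maps so well onto specialized hardware.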
Achronix’s white paper surveys the various regimes of AI and places them on a timeline of innovation. As that timeline approaches the present day, an important observation/prediction is made.
Adaptive AI algorithms, most certainly DL algorithms, will not only learn on their own, but also interpret real-time inputs from human beings. This ability to adapt in real time with minimal latency will be essential.
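One common way to realize this kind of real-time adaptation is online (incremental) learning, where each new human-provided sample updates the model immediately rather than waiting for a batch retrain. The sketch below shows a single stochastic-gradient update for a linear model; it is an illustrative example, not the method described in the white paper.

```python
def sgd_step(w, b, x, y, lr=0.01):
    # predict with the current model, then fold the new (x, y) sample
    # into the weights right away -- one low-latency update per input
    pred = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = pred - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b = b - lr * err
    return w, b
```

Because each update touches the model once per sample, the latency per adaptation step stays small and bounded, which is the property the white paper calls essential.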
This observation puts a great perspective on AI going forward. It’s not about replacing humans. It’s about leveraging their insights to create better results.
Keeping Up with the Data
Achronix then discusses the environment, ecosystem and technology needed for successful AI deployment. We all know that 5G networks and pervasive IoT devices are creating an explosion of data. Keeping up with the processing demands of that data is the key to success, but performing this rather daunting task in a commercially viable way is far easier said than done.
Throwing more servers at the problem is one approach, and it has been the plan of record for many years. But one size doesn’t fit all, and this approach drives CAPEX and OPEX so high that commercial viability falls out of reach. Specialization is the key to taming this challenge. Enter data accelerators. The white paper points out that, depending on the accelerator type and the workload, a single data accelerator on one server can do the work of as many as 15 servers, drastically cutting CAPEX and OPEX.
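A back-of-the-envelope calculation shows why that consolidation ratio matters. The dollar figures below are purely hypothetical placeholders (the white paper does not publish cost data); only the 15:1 replacement ratio comes from the text.

```python
import math

def consolidation_savings(num_servers, server_capex, server_opex_per_year,
                          accel_ratio=15, accel_card_cost=0.0, years=3):
    # one accelerated server replaces `accel_ratio` plain servers
    accel_servers = math.ceil(num_servers / accel_ratio)
    baseline = num_servers * (server_capex + years * server_opex_per_year)
    accelerated = accel_servers * (server_capex + accel_card_cost
                                   + years * server_opex_per_year)
    return baseline - accelerated

# e.g. 150 servers at a hypothetical $10k CAPEX / $2k-per-year OPEX each,
# versus 10 accelerated servers carrying a hypothetical $5k card
savings = consolidation_savings(150, 10_000, 2_000,
                                accel_ratio=15, accel_card_cost=5_000, years=3)
```

Even with a generous accelerator card cost folded in, the fleet shrinks by an order of magnitude, which is where the CAPEX and OPEX reduction comes from.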
This is clearly the way forward. If we consider CPUs the baseline approach to data crunching, three architectures can take us to the next level: GPU, FPGA and ASIC. Each occupies a different position along a spectrum of programmability, customizability and cost, as shown in the figure below.
A Surprise Ending
Here is where the white paper takes a very interesting turn. What if you could achieve the efficiency and optimality of an ASIC with an off-the-shelf device that costs far less? It turns out Achronix is delivering on this dream. A combination of FPGAs, embedded FPGA IP, a unique 2D network-on-chip (NoC) architecture and a large array of very high-speed interfaces puts this goal within reach.
I learned a lot about AI, its deployment and some unique ways to implement it from this white paper from Achronix. If AI is part of your next design project, I highly recommend reading it. You can get your copy here. You’ll learn how to implement AI in a cost-effective manner. You’ll also learn what the future may look like as robotic precision and human creativity write the next chapter.