By now, you should know about AI in the cloud for natural language processing, image recognition, recommendation and more (thanks to Google, Facebook, AWS, Baidu and several others) and AI on the edge for collision avoidance, lane-keeping, voice recognition and many other applications. But did you know about AI in the fog? First, a credit – my reference for all this information is Kurt Shuler, VP Marketing at Arteris IP. I really like working with these guys because they keep me plugged in to two of the hottest domains in tech today – AI and automotive. That, and the fact that they’re really the only game in town for a commercial NoC solution, which means that pretty much everyone in AI, ADAS and a bunch of other fields (e.g. storage) is working with them.
Now back to the fog. In simpler times we had the edge, where we are sensing and actuating and doing a little compute, and the cloud, to which we push all the heavy-duty compute and which serves feedback and updates back to edge nodes. It turns out that this two-level hierarchy isn’t always enough, especially as we introduce 5G. That standard opens up all sorts of new possibilities in electronification, but it doesn’t have the same range as LTE. We can no longer depend solely (or even mostly) on the cell base stations with which we’re already familiar; to be effective, 5G requires mass deployment of small-cell stations connecting to edge nodes and handling backhaul either through the core wireless network or via satellite. These small-cell nodes are the fog.
AI has already gained a foothold in these fog nodes to better optimize the quality of MIMO communication with mobile (or even stationary) edge nodes. MIMO quality depends on beamforming between multiple antennas at the base station and the user equipment (UE). Figuring out how to optimize this at any given time through link adaptation, and how best to schedule transmissions at the base station to minimize interference between channels, are complex problems which increasingly look like a good fit for AI. There are other AI applications too, in managing intermittent reliability problems and in intelligently automating network slicing.
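To make link adaptation a little more concrete, here is a minimal sketch in Python. The idea is that the base station picks a modulation and coding scheme (MCS) based on the measured signal quality; in practice this decision comes from standardized CQI/MCS tables and channel feedback, and the AI angle is replacing a fixed lookup like this with a learned policy. The thresholds and MCS names below are purely illustrative assumptions, not values from any 3GPP specification.

```python
# Illustrative link-adaptation sketch. Thresholds and MCS labels are
# invented for this example; a real system uses standardized tables,
# and an AI-based scheduler would learn this mapping from channel
# feedback instead of hard-coding it.

MCS_TABLE = [
    # (minimum SINR in dB, modulation/coding scheme) - hypothetical values
    (18.0, "64QAM-5/6"),
    (12.0, "16QAM-3/4"),
    (6.0,  "QPSK-2/3"),
    (0.0,  "QPSK-1/3"),
]

def select_mcs(sinr_db: float) -> str:
    """Return the most aggressive MCS whose SINR threshold is met."""
    for threshold, mcs in MCS_TABLE:
        if sinr_db >= threshold:
            return mcs
    return "out-of-range"  # below decodable SINR; retransmit later

print(select_mcs(14.2))  # prints "16QAM-3/4"
```

A learned policy has room to beat a static table because the best choice also depends on interference from neighboring cells, UE mobility and scheduling decisions, which is exactly the multi-variable optimization the article describes.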
Once you have AI support in a fog node, it’s not a big leap to imagine providing support to the edge nodes it services. But haven’t we all been arguing that AI is moving to the edge? Why do we need support in the fog? Yes, AI is moving to the edge but it’s a constrained form of AI. In voice command recognition for example, an edge node can be trained to recognize a catalog of commands, even phrases if they’re relatively short (I’ve heard up to ~10 words). If you want natural language recognition for more open-ended command possibilities, you have to go to the cloud, which can handle the complexity of the task but has its own downsides – latency and security among others. Handling tasks of intermediate complexity in the fog (without needing to go to the cloud) could look like an attractive proposition, certainly to the operators who will probably charge for use of that capability.
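The edge/fog/cloud trade-off described above is essentially a routing decision on task complexity versus latency. A minimal sketch, assuming invented thresholds (the ~10-word edge limit comes from the article; the fog cutoff is a made-up placeholder):

```python
# Hypothetical sketch of tiered inference routing. The edge handles a
# fixed catalog of short commands (~10 words, per the article); the fog
# takes intermediate tasks without a cloud round-trip; everything
# open-ended and long goes to the cloud. The 30-word fog cutoff is an
# assumption for illustration only.

def route_request(num_words: int, open_ended: bool) -> str:
    """Pick the cheapest tier that can plausibly handle the request."""
    if not open_ended and num_words <= 10:
        return "edge"   # trained command catalog, lowest latency
    if num_words <= 30:
        return "fog"    # intermediate complexity, avoids cloud latency
    return "cloud"      # full natural-language handling, highest latency

print(route_request(5, False))   # prints "edge"
print(route_request(20, True))   # prints "fog"
```

The interesting commercial point is the middle branch: that is the capability operators could host in fog nodes and charge for.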
All interesting background, but what does this have to do with us in the design world? Network equipment makers are increasingly returning to custom design to provide at least some of these capabilities (and indeed to meet other needs in the rapidly booming 5G domain, such as supporting end-to-end private wireless networks). The chip suppliers who feed these companies are racing ahead too. Nokia, Ericsson and Qualcomm all have positions on 5G and AI. Which means that AI-centric design will boom in this area.
I don’t know if the operator equipment companies will use standard AI chips (Wave Computing, Movidius, ..), adapted over time to their needs, or will build their own. Either way, I do expect a boom in 5G-centric AI applications, especially for these fog nodes. That will mean increased demand for AI-centric SoC design, with a need for highly customizable on-chip networks within accelerators, cache coherence between the accelerator(s) and IPs on the SoC, and super-fast connectivity to off-chip high-bandwidth memory or GDDR6. In other words, all the capabilities that AI leaders like Wave Computing, Movidius, Baidu, Google and many others (it’s a long list) have been building on and continue to build on with Arteris IP capabilities such as Ncore NoCs, the AI package, FlexNoC and CodaCache. Check them out.