Disturbances in the AI Force
by Bernard Murphy on 01-03-2019 at 7:00 am

In the normal evolution of specialized hardware IP functions, initial implementations start in academic research or in R&D at big semiconductor companies, motivating new ventures specializing in functions of that type, which then either build critical mass to make it as a chip or IP supplier (such as Mobileye – initially) or get sucked into a larger chip or IP supplier (such as Intel or ARM or Synopsys). That is where hardware functions ultimately settled, and where many still do.

But recently the gravitational pull of mega-companies has distorted this normally straightforward evolution. In cloud services these include Amazon, Microsoft, Baidu and others. In smartphones you have Samsung, Huawei and Apple – yep, Huawei is ahead of Apple in smartphone shipments and is gunning to be #1. These companies, neither semiconductor vendors nor IP suppliers, are big enough to do whatever they want to grab market share. What they do to further their goals in competition with the other giants can have a major impact on the evolution path for IP suppliers.

Talking to Kurt Shuler, VP Marketing at Arteris IP, I got some insight into how this is changing for AI IP. Arteris IP started working with Cambricon, a Beijing-based startup in fabless AI IP/devices, some time ago. Based on that work, Arteris IP built the FlexNoC AI package I wrote about recently. Cambricon is a very interesting company for a number of reasons. One is that they took one of those “gee, why didn’t we think of that?” approaches to designing a platform for neural net (NN) implementations: they developed an optimized instruction set architecture (ISA) based on analysis of multiple NN benchmarks. Then they leveraged this into a design win with Huawei/HiSilicon. The company is attracting attention; including their current Series B round, they have raised $200M to date.

The deal with Huawei/HiSilicon led to the IP appearing in the Huawei Kirin 970 smartphone chipset. But Huawei/HiSilicon decided to build their own neural processing unit for the Kirin 980, now in production (also apparently the first 7nm product in production). In other words, this piece of technology was so important to Huawei that they decided to ditch their IP supplier and make their own. Weep not for Cambricon though. They’re already on their next rev and squarely targeting the datacenter AI training applications for which NVIDIA is so well known.

On the cloud side, consider Baidu, who are effectively the Google of China. Just like Google, they have been working intensively on AI, for many of the same reasons, such as image search and autonomous driving, but also for some closer to Chinese government interests, such as intelligent video surveillance. Baidu started in AI working with FPGAs and (apparently) licensing IP. More recently they too developed their own AI chip, Kunlun, in 14nm, and seem set to continue on this path.

As a reminder, these high-end AI systems depend on highly customized 2D architectures of many NN-dedicated processors connected in specialized configurations such as grids or tori, with memories/caches embedded within this structure, along with other distributed services to accelerate common functions like weight updates. In these architectures, the network-on-chip (NoC) connecting all of these functions becomes critical to meeting performance and other goals, which is why Arteris IP is so involved with these companies.
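
To get a feel for how fast the interconnect problem grows, here is a minimal, purely illustrative Python sketch (my own, not FlexNoC or any Arteris IP interface; the grid sizes and link model are hypothetical) that enumerates the nearest-neighbor links in a 2D torus of NN processing elements:

# Illustrative sketch only: count the bidirectional links in a rows x cols
# torus of NN processing elements (PEs). Not a vendor API; the link model
# is a simplification, just to show that the interconnect scales with the
# compute array it serves.
def torus_links(rows, cols):
    links = set()
    for r in range(rows):
        for c in range(cols):
            # Each PE links to its east and south neighbors; the modulo
            # wrap-around at the edges is what makes the grid a torus.
            east = (r, (c + 1) % cols)
            south = ((r + 1) % rows, c)
            links.add(frozenset({(r, c), east}))
            links.add(frozenset({(r, c), south}))
    return links

for n in (4, 8, 16):
    print(f"{n}x{n} torus: {n * n} PEs, {len(torus_links(n, n))} links")

Even this toy model has twice as many links as PEs, before adding the embedded memories, caches and distributed services described above; getting all of that traffic to meet bandwidth and latency goals is exactly the NoC tuning problem.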

Another interesting aspect of the Baidu direction is that they are targeting their AI devices and corresponding software at a pretty wide range of applications. One application is certainly NN training in the datacenter, potentially replacing NVIDIA and countering the Google TPU. That is a recurring theme and perhaps a wakeup call for suppliers who thought they had a lock on those sockets. But they are also planning use for inference in the datacenter, a new one on me. Apparently a lot of this is still happening in the datacenter despite enthusiasm for moving AI to the edge, perhaps especially for IoT devices in China, where IoT is taking off arguably faster than anywhere else. And Baidu have big aspirations for automotive and home automation, which means they want an architecture they can scale across this range. It reminds me of what NXP is doing with their eIQ software.

So more big companies are investing in their own AI hardware, for very logical reasons: they feel they have to manage the architecture to meet their own plans across a diverse range of applications. It also can’t have escaped your attention that virtually every company I have talked about here is Chinese. A lot of money is going into AI in China, internally in big companies and from venture funds. Another company in this class is Lynxi, also targeting an architecture for both training and inferencing in the datacenter. Lynxi is apparently backed by serious funding, though details seem difficult to find.

Overall, more big companies are building their own AI chips and more small companies are popping up in this area, and a lot more of this activity is visible in China. A disturbance in the force indeed. Arteris IP is closely involved with many of these companies, from Cambricon to Huawei/HiSilicon to Baidu to emerging companies like Lynxi, offering its network-on-chip (NoC) solutions with an AI package that allows the architecture to be tuned to the special needs of high-end NN designs. Check it out HERE.
