Segmenting the Machine-Learning Hardware Market
by Bernard Murphy on 03-13-2019 at 12:00 pm

One of the great pleasures in what I do is working with people who are themselves working with people in some of the hottest design areas today. A second-level indirection to be sure, but it gives me the luxury of taking a broad view. A recent discussion I had with Kurt Shuler (VP Marketing at Arteris IP) falls into this class. As a conscientious marketing guy, he wants to understand the available market in AI hardware because Arteris IP has quite a bit of activity in that space – more on that later.


So Kurt put a lot of work into finding every company and product he could that is active in this space – 91 entries in his spreadsheet. He broke these down by company, territory (e.g. China or US), product, target market (e.g. vision or speech), implementation (e.g. FPGA or ASIC), whether the product is used in datacenters or at the edge, and whether it is used for training or inference. I'll share some interesting observations from the list but not the list itself; Kurt told me he put a lot of work into building it, so I can't imagine he'd be excited about giving it away.
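To make the segmentation concrete, here is a minimal sketch of what one row of such a spreadsheet might look like as a data structure. The field names, value sets and the example entry are my own illustration, not Kurt's actual schema or data.

    from dataclasses import dataclass
    from typing import Literal

    # Illustrative schema only; field names and categories are assumptions,
    # inferred from the breakdown described in the article.
    Territory = Literal["China", "North America", "EMEA", "Japan", "Korea"]
    Deployment = Literal["edge", "datacenter", "both"]
    Workload = Literal["inference", "training", "both"]

    @dataclass
    class MLHardwareEntry:
        company: str
        product: str
        territory: Territory
        target_market: str       # e.g. "general", "vision", "speech", "automotive"
        implementation: str      # e.g. "ASIC", "FPGA", "IP"
        deployment: Deployment   # datacenter, edge, or both
        workload: Workload       # training, inference, or both

    # A hypothetical entry, purely for illustration:
    example = MLHardwareEntry(
        company="ExampleCo",
        product="ExampleAccel-1",
        territory="North America",
        target_market="vision",
        implementation="ASIC",
        deployment="edge",
        workload="inference",
    )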

I want to be clear up front that this analysis is based on counts of companies, large and small, and products, also large and small. It is not based on investment dollars or revenue, so it underestimates the impact of the hyperscalers and a few others like NVIDIA. This is necessarily an incomplete analysis, but it is still an interesting indicator of how many organizations are chasing AI hardware opportunities.

Let’s start with territories. Kurt found 28 products in China from 20 companies. In the rest of Asia, Japan has just 4 entries and Korea shows just one (KAIST), which I think may reflect the difficulty of finding details on products from Samsung, LG and others, since I know they are active in AI. North America shows 38 entries from 32 companies, and EMEA (Europe, Middle East and Africa) has 20 entries. Bottom line: China, North America and EMEA are all very active and roughly comparable in product and company count, while Japan lags significantly, and Korea probably doesn't want to share a lot of information.

Breakdown by target market is more challenging since over half the entries fall into a "general" category – they want to sell to all markets. A little more informatively, 12% fall into what ARM would call infrastructure – from cloud (HPC, servers, storage) to connectivity (5G) – mostly in North America, then Japan (which seems to focus its limited investment exclusively in this area) and a few in EMEA. About 10% are clearly targeted at automotive applications, mostly in North America, again with a few in EMEA and one in China. After that, there's a sprinkling of target markets around smartphones, vision, cameras and surveillance, mostly in China with a couple in EMEA.

On implementation, nearly 80% of the solutions are ASICs, 11% are IP and 9% are FPGAs, some of the latter for neuromorphic implementations. Kurt said he often sees FPGA implementations migrating to ASIC for all the usual reasons. The spread is fairly uniform geographically, at least for ASIC and IP (there are too few FPGA examples to support geographic conclusions).

50% of applications are at the edge and 40% in the datacenter (you might have expected this gap to be wider); the rest span both edge and datacenter. Another useful breakdown is how many products target training versus inference. Unsurprisingly, about 60% go purely into inference and only about 6% purely into training, with the balance going into products which address both. In inference the split is fairly even geographically, but almost all the training product interest is in North America.
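As a quick sanity check on those splits, here is a tiny worked sketch using the rounded figures quoted above; the "both" categories are simply what remains after the pure categories are accounted for, so treat the derived values as approximate.

    # Rounded percentages quoted in the article; "both" is derived as the remainder.
    deployment = {"edge": 50, "datacenter": 40}
    deployment["both"] = 100 - sum(deployment.values())   # -> 10

    workload = {"inference": 60, "training": 6}
    workload["both"] = 100 - sum(workload.values())       # -> 34

    print(deployment)  # {'edge': 50, 'datacenter': 40, 'both': 10}
    print(workload)    # {'inference': 60, 'training': 6, 'both': 34}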

One last split: at the edge, focus is almost exclusively on inference, whereas in the datacenter the bulk of interest is in both training and inference (pure training remains a small percentage). Why inference in the datacenter? Even for datacenters, the bulk of the hardware opportunity is still in inference, per a recent McKinsey report. Not entirely surprising: training needs are compute-intensive but infrequent, while inference needs may be less compute-intensive, but demand is non-stop.

What does all of this have to do with Arteris IP? They play a significant role in ASIC applications across almost all of these domains. They're in Mobileye and Movidius, Baidu, Huawei and Cambricon, NXP, Toshiba, Dreamchip, Horizon Robotics, Bitmain, Canaan Creative, Wave Computing and Intellifusion – all AI applications in which their NoC interconnect either connects an AI accelerator to a cache-coherent CPU subsystem (using their Ncore cache-coherent interconnect) or sits deep in the fabric of the accelerator itself in advanced datacenter applications (using their FlexNoC AI package).
