Samsung Exynos 9820: 8nm, dedicated NPU

Fred Chen

Moderator
Samsung announced its Exynos 9820, which for the first time includes a dedicated NPU: Samsung's next-gen Exynos 9820 chip adds dedicated AI for its 2019 phones. Manufactured on 8nm, it will go head-to-head against Apple's A12 and Huawei's Kirin 980, both built on TSMC 7nm. The Apple A12 also includes neural hardware: iPhone XS - A12 Bionic - Apple. The Huawei Kirin 980 has a dual NPU: Huawei promises its Kirin 980 processor will destroy the Snapdragon 845 - The Verge. It looks like NPUs are the new thing to include in processors.
 
I am wondering why Samsung didn't manufacture this chip with its newly announced 7nm EUV process, to really compete against the 7nm A12 or Kirin 980?
 
There are neural processing units now; that's incredible.
 
However, I suspect it will take some time before NPUs have THAT much effect on our everyday computing (i.e., what Joe Average User gets out of the box or via some high-profile apps).
What we have now, realistically, appears to be primarily image recognition in a few limited scenarios: FaceID and photo labelling.

Where do we go from here? A few concepts have been suggested (many around image recognition), but I'm unaware of them actually shipping. For example, connections to AR have been suggested (recognize the items and people in a live video stream), but that's not yet a product, just a cute demonstration. Likewise, à la _Silicon Valley_, "the Shazam of Food" might one day be useful in counting calories purely by having the camera recognize the food (AND its quantity...) on your plate, but so far that's an idea, no more.

Many other things have been suggested -- translation, sentiment analysis, language understanding, ...; but as far as I know these either don't yet work THAT well, or aren't available to consumers, or don't actually need an NPU (especially an on-device NPU --- maybe they use a TPU back home in the data center).
For example, yes, Siri (and Bixby and the rest of them) engage in some sort of pattern recognition on the device to do various useful things (like suggest the next few apps you might want to run), and (at least in the case of Siri) these are generally useful and generally accurate. But I see no evidence that they especially require an NPU; the data volumes are small, and they worked as well as they do today on pre-NPU iOS devices.

I'm not pooh-poohing the idea of NPUs. And even if all they did was FaceID, well, FaceID is pretty damn magical and worth having! I AM suggesting that I've heard little in the stream of hype that I find convincing as realistic uses for NPUs *TODAY* beyond what I have given.
It took us a remarkable amount of time to figure out generically useful ways to exploit GPUs beyond just video games; I suspect the same might be true for NPUs: it will be 5 years or more before we see them doing anything that's ACTUALLY interesting (again, beyond what I've already listed).
 
There's an AI app for that!

However, I suspect it will take some time before NPUs have THAT much effect on our everyday computing

I am sure there will be a thousand cool little apps out there to entertain us with their pre-trained AI inference models.

Makes me want to take another stab at getting my machine learning algorithm to work.

Neural Networks API | Android NDK | Android Developers
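If anyone wants to tinker, here's a minimal sketch of what on-device inference can look like on Android, going through TensorFlow Lite's NNAPI delegate rather than the raw NDK C API. It assumes the org.tensorflow:tensorflow-lite dependency; the model file name, input layout, and 1000-class output shape are placeholders, not anything from a real app:

    import android.content.Context
    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.nnapi.NnApiDelegate
    import java.io.FileInputStream
    import java.nio.MappedByteBuffer
    import java.nio.channels.FileChannel

    // Memory-map a bundled .tflite model; "model.tflite" is a placeholder
    // for whatever pre-trained model the app ships in its assets.
    fun loadModel(context: Context): MappedByteBuffer =
        context.assets.openFd("model.tflite").use { fd ->
            FileInputStream(fd.fileDescriptor).channel.use { channel ->
                channel.map(FileChannel.MapMode.READ_ONLY,
                            fd.startOffset, fd.declaredLength)
            }
        }

    fun runInference(context: Context, input: FloatArray): FloatArray {
        // The NNAPI delegate hands supported ops to the device's NPU/DSP/GPU,
        // falling back to the CPU for anything it can't accelerate.
        val nnApi = NnApiDelegate()
        val options = Interpreter.Options().addDelegate(nnApi)
        val interpreter = Interpreter(loadModel(context), options)
        val output = Array(1) { FloatArray(1000) }  // shape depends on the model
        interpreter.run(arrayOf(input), output)     // batch of one
        interpreter.close()
        nnApi.close()
        return output[0]
    }

Whether the NPU actually gets used depends on the device's NNAPI drivers; on hardware without them, this silently runs on the CPU, which is rather the point of the delegate design.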


I attempted to train a neural net a year or so ago, but quickly found how hard it is to keep the neural net from getting 'stuck'. There are all sorts of techniques to prevent it from getting stuck, and they work to some degree. The problem is that it takes forever to debug where/why a multi-layered neural net gets stuck on something. Kind of the same way my brain gets stuck thinking in the same old patterns.
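
As a toy illustration of the 'stuck' problem: plain gradient descent on a 1-D loss surface with two basins settles into whichever minimum it starts near, and random restarts -- one of the simplest anti-stuck techniques -- usually recover the better one. The function and all constants below are made up purely for illustration:

    import kotlin.math.pow
    import kotlin.random.Random

    // Toy 1-D "loss" with a global minimum near x = -1.30 and a shallower
    // local minimum near x = 1.13 (made-up function, for illustration only).
    fun loss(x: Double) = x.pow(4) - 3 * x.pow(2) + x
    fun grad(x: Double) = 4 * x.pow(3) - 6 * x + 1

    // Plain gradient descent converges to whichever basin it starts in --
    // i.e. it happily gets "stuck" in the local minimum.
    fun descend(start: Double, lr: Double = 0.01, steps: Int = 2000): Double {
        var x = start
        repeat(steps) { x -= lr * grad(x) }
        return x
    }

    fun main() {
        val stuck = descend(2.0)  // starts in the wrong basin
        println("single run: x = %.3f, loss = %.3f".format(stuck, loss(stuck)))

        // One of the simplest anti-stuck techniques: random restarts --
        // descend from several random points and keep the best result.
        val rng = Random(42)
        val best = (1..10).map { descend(rng.nextDouble(-3.0, 3.0)) }
                          .minByOrNull { loss(it) }!!
        println("best of 10: x = %.3f, loss = %.3f".format(best, loss(best)))
    }

Momentum, learning-rate schedules, and better initialization attack the same problem from other angles, but they're all harder to debug when you can't see the loss surface.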
 