
Latest 22/20/14nm Performance Benchmarks

benb

Well-known member
PCMark for Android (December 2015)
BAPCo TabletMark (latest)

Update: Results
iOS 9.2 tablet (most likely an iPad Pro) outperforms Atom x7-Z8700 tablets but not Core m3 (Intel Core based on Skylake) tablets. The Apple A9X has thus ascended into the elite tablet/desktop processor performance range, according to TabletMark.

The Samsung 14nm device (Exynos 7420) outperforms the Intel Atom x7-Z8700 by 23%.
Qualcomm Snapdragon 808 devices lag the best from Intel, Samsung, Apple and Nvidia.

Tegra, built on TSMC's 20nm process, is a top performer (the Nexus 9 places 4th overall) and should have more market share than it does. Perhaps 16nm Tegra will win more market share.

Intel Atom benchmark results are mixed. Although Intel's 14nm parts make a poor showing against the Exynos, the 22nm Atom Z3580 ranks high (3rd place). Puzzling.
 
Was Apple not included? I thought the A8 in my iPhone 6 was pretty fast. And I can tell you the A9X in my iPad Pro absolutely screams! You also have to wonder how big a role the software plays in this. From what I have read, some phone vendors modify Android specifically to get better numbers.

My conclusion: since Apple controls both the silicon and the software, they have a distinct advantage when it comes to the "user experience", so they don't have to pander to the benchmarking horde.
 
Hard to run an Android benchmark on iOS, I'd imagine. And Apple has zero incentive to make that arithmetic easier.
 
@Daniel: Benchmarks should be regarded skeptically of course but I don't look at Apple's lack of transparency as an advantage. Optimized benchmark results can be detected (I believe Futuremark simply rejects the data from their database when they detect it; this builds my confidence in the results). And like a Chipworks teardown, the truth is there, and can't be hidden, obscured or manipulated.

I am still waiting on some TSMC 16nm FF or FF+ devices to pop up in this database. Maybe next month...Any predictions where 16FF/FF+ will place?
 
Update: Daniel, I found an iOS/Windows/Android cross-platform benchmark, TabletMark. I added the link to the post at the top of this thread. The new Skylake Core m3 outperforms the best iOS device (presumably A9X-based) by about 25%.
 
Apart from everything else here, I'd point out that anyone taking TabletMark seriously really ought to read the BAPCo whitepaper about it. What I find particularly problematic is the compiler options it uses.
These are, and I quote:
Android (default compiler included with Android NDK r10c):
    -O3 (enables many general optimizations)
    -ftree-vectorize (implied by -O3)
    -ffast-math (enables common math optimizations for code that doesn't require strict IEEE compliance)
    -fomit-frame-pointer (implied by -O3; reduces memory consumption to support lower-RAM, e.g. 1 GB, devices)

iOS (default compiler included with Xcode 6.0.1):
    default (-Os and Automatic Reference Counting)

Windows (default compiler included with Microsoft Visual Studio 2013):
    default (/O2) + /Oi (generate intrinsic functions)

Note how Android is using the most aggressive optimizations, iOS the least aggressive, Windows in between.
This strikes me as making the benchmark misleading for many purposes.
You can argue that the default compiler options represent some sort of "average user experience", but my suspicion is that for code that actually MATTERS, developers on each platform do rather better than just use the defaults. The defaults don't matter when you're using your Citibank app or diet-logging app (and in that case, I'd argue Apple's default is the most sensible), but for cases where performance does matter, I would imagine that the top tier game developers, Chrome, and Apple's internal developers, to give a few examples, are being rather more aggressive in their choices.

The point is not to slander BAPCo (different benchmarks have different uses) but to point out that TabletMark is not an especially useful metric of CPU performance. It is VERY MUCH a whole-system metric. (Even as a whole-system metric, I'm not at all convinced that it works well, but I don't know enough to comment on that. As far as I can tell, what it is TRYING to do is simulate a stream of user events into apps and then measure how long some process takes from "simulated event" to "process completion". The problem is that if the OS does not provide specific hooks for knowing when the process being timed is complete, then you have to use proxies, and those proxies may be problematic. And iOS seems like exactly the sort of place that is NOT going to be favorable to this kind of monkeying around with faked user behavior...)

To compare base CPU performance, I consider GeekBench a lot more helpful because it is substantially less subject to these problems.
 