Dual-core ARM Cortex-A15 faster than Intel Atom and quad-core Cortex-A9

December 3, 2012

    When Apple released the iPhone 5 it didn’t contain a quad-core CPU as many people had expected; however, Apple did claim that the new phone was up to twice as fast as its predecessor. The most likely reason that Apple could make such a claim is that its A6 CPU uses the Cortex-A15 architecture. With the recent release of Google’s new Chromebook, which is powered by Samsung’s Cortex-A15-based Exynos 5 Dual (Exynos 5250), the benchmarking gurus have now had a chance to really test the new architecture from ARM, and the results are impressive.

    The Exynos 5 Dual is a dual-core 1.7 GHz Cortex-A15 CPU that boasts 12.8 GB/s of memory bandwidth. We already know that it is faster than Apple’s A6, but more recent tests run on the Chromebook using the Phoronix Test Suite show that the CPU is also faster than Nvidia’s quad-core Tegra 3 and at least two different Intel Atom models (the N270 and the Z530).

    Since the Chromebook can be made to run Ubuntu, the Linux-based Phoronix tests can be executed on the same OS across all the processors, providing a level playing field. This means the results should be more reliable than those from more subjective browser-based tests like the SunSpider benchmark.
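    For anyone who wants to reproduce the numbers once Ubuntu is running, here is a minimal sketch of driving the Phoronix Test Suite from a script. It assumes the suite is already installed; the test profile names (pts/x264 and pts/scimark2) are assumptions based on common Phoronix profiles, not necessarily the exact set used in these runs.

```python
import subprocess

# Profile names assumed here; confirm the exact names with:
#   phoronix-test-suite list-available-tests
tests = ["pts/x264", "pts/scimark2"]

for test in tests:
    # `benchmark` installs the test's dependencies if needed,
    # then runs it and records the result.
    subprocess.run(["phoronix-test-suite", "benchmark", test], check=True)
```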

    In the tests the Exynos 5 Dual was pitted against the quad-core Tegra 3 running at 1.4 GHz and the single-core Intel Atom N270 running at 1.6 GHz, along with its cousin the Z530. These first three CPUs didn’t pose any real threat to the Exynos 5250. However, some more worthy opponents were also thrown into the ring: the dual-core Intel Atom D525 running at 1.8 GHz and the 2.13 GHz Core i3-330M.

    So how well did the Exynos 5250 perform? The test suite includes many different types of real-world and computational benchmarks, and in general the Exynos 5250 is up to twice as fast as the Tegra 3 and three times as fast as the two single-core Intel Atom chips. It is also roughly on par with the dual-core Intel Atom D525. When compared to the i3, however, the Exynos 5250 still has a long way to go, but that isn’t surprising since the i3 runs at over 2 GHz and, although called a mobile processor, isn’t really suited to the same applications as the ARM-based chips.

    Taking the real-world H.264 video encoding test as an example, the Exynos 5250 manages 10.62 frames per second and the Tegra 3 8.13 fps, while the N270 and Z530 can do 5.08 and 4.93 fps respectively. The D525 marginally beats the Exynos 5 Dual with 11.61 fps, and the i3 easily beats the rest with 38.84 fps.
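    To put those figures in perspective, a quick script normalizing each result against the Exynos 5250 (using only the numbers quoted above) shows the Exynos about 1.3x the Tegra 3, the D525 about 1.1x the Exynos, and the i3 roughly 3.7x the Exynos:

```python
# H.264 encoding results quoted above (frames per second).
fps = {
    "Exynos 5250": 10.62,
    "Tegra 3": 8.13,
    "Atom N270": 5.08,
    "Atom Z530": 4.93,
    "Atom D525": 11.61,
    "Core i3-330M": 38.84,
}

baseline = fps["Exynos 5250"]
for chip, value in sorted(fps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{chip:13} {value:6.2f} fps  ({value / baseline:.2f}x the Exynos 5250)")
```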

    For pure computational tests the Exynos 5250 is beaten only by the i3. Under the Monte Carlo flops (floating-point operations per second) test, the Exynos 5250 managed 167.9 Mflops, the Intel Atoms (N270, Z530, D525) managed 47.98, 48.2 and 65.15 Mflops respectively, and the i3 stole the show with 260.62 Mflops.
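    A Monte Carlo flops test of this kind (Phoronix’s number most likely comes from a SciMark-style pi-estimation kernel, though that is an assumption) samples random points in the unit square, counts how many land inside the quarter circle, and divides the floating-point operations performed by the elapsed time. The sketch below illustrates the computation being timed, not the actual benchmark implementation; an interpreted version like this will report far lower Mflops than the C original.

```python
import random
import time

def monte_carlo_pi(samples):
    """Estimate pi by sampling random points in the unit square."""
    inside = 0
    for _ in range(samples):
        x = random.random()
        y = random.random()
        # Two multiplies, one add, one compare per sample.
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

samples = 2_000_000
start = time.perf_counter()
pi_estimate = monte_carlo_pi(samples)
elapsed = time.perf_counter() - start

flops = 4 * samples  # assumes 4 floating-point ops per sample
print(f"pi ~= {pi_estimate:.5f}")
print(f"{flops / elapsed / 1e6:.2f} Mflops")
```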

    Since the Exynos 5 Dual is starting to appear in more mainstream devices, like the Nexus 10, it looks like the Exynos 5250 is the processor to beat.

    Comments

    • HellG

      “The most likely reason that Apple could make such a claim is that its A6 CPU uses the Cortex-A15 architecture”
      As far as I know (correct me if I’m wrong, please), the A6 uses pretty much a Qualcomm Krait-like design, taking the best of both worlds, some from the A9 design and some from the A15, but it is still technically considered an A9 chip

      • hrrseh

        You’re right. They didn’t use the A15 architecture, but made their own based on it, like what Qualcomm does with its Snapdragon line. It would still be inferior to the real A15.

        • kascollet

          That’s not right. Scaled down to the same frequency (as the A6 running in the iPhone 5), Apple’s Swift architecture is still faster than a dual-core A15 Exynos CPU-wise, and miles ahead in GPU and memory bandwidth.

          • HellG

            The problems are:
            1. All the new Nexus devices give OK scores in benchmarks, so it’s more like optimizing the device for these kinds of programs? At the end of the day what you want is REAL performance. Ignoring a few lags here and there due to the buggy Android 4.2 base, the performance is actually great, and all the lags should be ironed out with future updates. As for the GPU, PowerVR always does an amazing job, but you have to remember the HUGE difference between screen resolutions, which bottlenecks the heck out of the GPU. You are comparing apples to oranges here; you should compare apples to apples (pun intended :))

      • Roberto Tomás

        They use a modified Cortex-A9… so yeah, it sort of is correct. The modifications include back-porting the memory subsystem from the A15, something they got thanks to Samsung.

    • xoj_21

      And there is a quad-core version on the way

    • Roberto Tomás

      Let’s not forget Intel’s hyper-threading… that 330M acts as 4 cores at the operating-system level. Quad-core ARM chips should be compared against an Intel dual-core with hyper-threading, just to keep the comparison fair.

      A 4-core A15 should easily beat all the Atoms (the next Galaxy Tab is rumoured to be a 4-core part with the Mali-T658 GPU, and is expected around CES next month).

      Some companies like Nvidia are even considering an 8-core A15 for laptops, mini PCs, and maybe tablets. An 8-core A15, going by those H.264 numbers, should compete with the i3-330M… that’s only 2 years behind Intel at this point: better than AMD. Way to go ARM!

      • http://www.facebook.com/people/Killswitch-Engage/100002416383925 Killswitch Engage

        No, the SMT is at least 5x slower than a full core. In the best-case scenario you’re only getting a 20% gain with SMT (hyper-threading), as all it does is recycle otherwise wasted execution slots. It doesn’t create new execution resources like floating-point and integer units, which are still tied up.

        • Roberto Tomás

          @KE – I read AnandTech say that in Intel’s case it is more like 60%… and even that, I think, was not the ideal case; optimal is better. Worst-case scenarios are more in line with what you are saying.
          The idea is that most of the time you can schedule without any concerns at all, and it only gets worse than a full 100% in the case of collisions.

          • http://www.facebook.com/people/Killswitch-Engage/100002416383925 Killswitch Engage

            Do you have a source?

            2500K (no SMT)
            2600K (SMT)

            http://www.anandtech.com/bench/Product/287?vs=288

            Nowhere in that graph does the 8-thread 2600K double the 2500K… not even close.

            • Roberto Tomás

              Percentages don’t really matter; what matters is the availability of CPU time for threads. To the operating system, hyper-threading looks like (and effectively is) twice as many cores. The fact that Intel has to make less efficient “cores” in order to use SMT certainly hasn’t stopped them from doing it.

    • Kagnon

      Is the video encoding even a fair test?

      Didn’t Intel have Quick Sync for that i3?

      http://www.intel.sg/content/www/us/en/architecture-and-technology/quick-sync-video/quick-sync-video-general.html

      But then again, the i3 demolishes the Exynos in most of the tests anyway.

    • http://profiles.google.com/gigastrash rad asds

      Why are they comparing it against outdated Atom chips?

      The Atom Z2460 (Medfield) has a higher clock than the N270 & Z530.

      The Atom Z2760 (Clover Trail) has one extra core.

      • Craig

        Because heat, power and cost are real-world factors. We can’t all live in a dreamy little armchair-pundit world where a pissing match about speed is the only thing that matters.

        People can say what they want about Intel closing the gap on power consumption, but their unit prices are still a joke. Their days of high margins and huge R&D budgets are coming to a close.

    • NicholasMicallef

      What would be interesting is a comparison between current smartphone CPUs and old desktop CPUs, to figure out which year’s desktop CPUs had roughly the same performance. Then do the same with last year’s smartphone CPUs, and so on for the last three or four years, and extrapolate the point in the future at which mobile CPUs would match the desktop CPUs of the same era. That should never actually happen, given that you should always be able to get better performance by using more power, while these mobile CPUs are geared towards lower power consumption; but I wouldn’t be surprised if future desktop CPUs (assuming traditional desktops continue to exist) use similar, if not the same, architectures, perhaps with more cores and higher frequencies on the desktop side.

      Of course, all of the above assumes that both industries improve at a constant rate, which doesn’t make much sense considering how slowly desktop CPUs are improving (both because more focus is being put on mobile, and because CPU performance matters very little in an average PC and still doesn’t matter much in high-end PCs, where GPUs carry more importance) and how fast mobile technology has moved in the last 5 years. Sounds a bit scary to me, since I still love to use a desktop CPU whenever possible for whatever I do.
