Last month Apple announced that it had extended its multi-year, multi-use license agreement with Imagination Technologies, giving Apple access to Imagination's range of current and future PowerVR GPUs. Apple is without a doubt Imagination's biggest mobile customer, but it isn't the only one. Other companies that license PowerVR GPUs include Samsung, MediaTek, AllWinner, Texas Instruments and Intel. This means the PowerVR GPU is found not only in the iPhone and iPad, but also in some variants of the Samsung Galaxy S4, the first models of the Amazon Kindle Fire, and a large portion of Chinese phones built around MediaTek processors.
The problem with technology is that it is complicated, and it is the job of marketing departments to tell consumers why product A is better than product B. For CPUs the marketers did a great job of convincing the general public that more megahertz and more cores are good: quad-core is better than dual-core, 1.2GHz is better than 1GHz, and so on. The problem is that this isn't always true, but that is a different discussion. The question here is how do you measure GPU performance? Cores? Megahertz? GigaFLOPS?
What exactly counts as a "core" in a GPU is debatable. Traditionally a GPU core was the front-end processor responsible for scheduling and dispatching work, or the term could describe a whole instance of a GPU. Imagination's SGX544MP3 GPU used three complete instances of the SGX544 engine, duplicating all the GPU resources three times. It was therefore called the MP3, as it was a three-core GPU.
However GPUs also contain units that perform certain mathematical and graphical functions. Most GPUs have lots of these units, and GPU vendors started calling them cores even though they can't act independently and need a front-end processor to schedule work for them. By this definition of a core, the latest GPUs from Imagination have 192 cores!
In the mobile space there are no fewer than seven GPU designers. If you thought picking a video card for a PC was hard, you ain't seen nothing yet! The current list includes ARM with its Mali range, Qualcomm with its Adreno range, NVIDIA, Intel, Broadcom, Vivante and Imagination.
If we assume that, in general, Qualcomm, NVIDIA and Intel only put their GPUs in their own processors, that leaves ARM, Imagination, Broadcom and Vivante to provide processor makers like Samsung, MediaTek and AllWinner with GPU designs. Of those, the leading two are ARM and Imagination. Both Samsung and MediaTek have used ARM's Mali GPUs and Imagination's PowerVR GPUs, and there doesn't seem to be a hard rule about which GPU will be used in any particular chip. As a rough generalization, however, Samsung has favored ARM's offerings while MediaTek has more often opted for PowerVR. Apple, on the other hand, has chosen PowerVR consistently: its A4, A5, A5X, A6, A6X and A7 system-on-a-chip processors all use PowerVR GPUs.
PowerVR GPUs differ from other GPUs in that they use a Tile Based Deferred Renderer (TBDR) rather than the Immediate Mode Renderer (IMR) found in other GPU architectures. On IMR systems the GPU is given details about every object in a scene, and the objects are rendered and composed to form the frame. The problem is that objects behind other objects are rendered and then discarded, as they can't be seen from the current viewpoint. This means any given pixel can be drawn more than once, which is obviously a waste of GPU time and battery power.
With TBDR the frame is sliced into small tiles, and a tile isn't rendered until the entire scene has been submitted to the GPU. That means before rendering starts the GPU knows which objects can be seen and which can't. Using a process called Hidden Surface Removal (HSR), the GPU saves time and power by rendering only the parts that are actually visible. Although other GPUs do have early rejection technologies, none of them are as good as TBDR.
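The overdraw saving can be sketched in a few lines of Python. This is purely illustrative (not real driver or GPU code, and the object names and cost model are made up): it counts how many times a single pixel is shaded under an IMR, which shades every fragment as it arrives, versus a TBDR, which resolves visibility first and shades only the front-most fragment.

```python
# A "scene" of three objects that all cover the same pixel.
# Lower depth = closer to the camera. Names are hypothetical.
scene = [
    {"name": "sky",    "depth": 9.0},  # farthest, submitted first
    {"name": "wall",   "depth": 5.0},
    {"name": "player", "depth": 2.0},  # closest, submitted last
]

def imr_shading_ops(objects):
    """IMR: shade every fragment as it is submitted; closer fragments
    later overwrite earlier ones, so the earlier work is wasted (overdraw)."""
    return len(objects)

def tbdr_shading_ops(objects):
    """TBDR: buffer the whole scene for the tile, resolve visibility
    (hidden surface removal), then shade only the visible fragment."""
    visible = min(objects, key=lambda o: o["depth"])
    print("visible object:", visible["name"])
    return 1

print("IMR shading ops: ", imr_shading_ops(scene))   # 3 (two wasted)
print("TBDR shading ops:", tbdr_shading_ops(scene))  # 1 (no overdraw)
```

For this pixel the IMR does three shading operations and throws two of them away, while the TBDR does one; scale that across millions of pixels per frame and the power saving becomes clear.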
Imagination recently announced its latest GPU, the PowerVR GX6650, the successor to the PowerVR G6430 found in Apple's A7 processor. On paper the GX6650 offers up to 115.2 GFLOPS at 300MHz, a significant jump from the 76.8 GFLOPS of the G6430. These numbers match those of NVIDIA's new Tegra K1 Kepler GPU, but the new Imagination GPU has a higher pixels-per-clock (ROPs) rate, which could be a key factor when it comes to pushing out higher resolution content such as 4K video. Whenever Apple does its next big GPU upgrade, it is possible that it will move to the GX6650.
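The headline figures line up with a simple back-of-the-envelope formula. As a sketch, assuming each ALU "core" issues one fused multiply-add (two FLOPs) per clock, and inferring 128 ALU cores for the G6430 from its quoted number (neither assumption is confirmed in the announcement):

```python
def peak_gflops(alu_cores, clock_mhz, flops_per_core_per_clock=2):
    # The factor of 2 assumes one fused multiply-add (FMA) per ALU
    # core per clock -- an assumption about how these marketing
    # figures are usually counted, not a confirmed spec.
    return alu_cores * flops_per_core_per_clock * clock_mhz / 1000.0

print(peak_gflops(192, 300))  # GX6650: 115.2 GFLOPS
print(peak_gflops(128, 300))  # G6430:  76.8 GFLOPS (128 cores inferred)
```

Under those assumptions, 192 cores at 300MHz gives exactly the quoted 115.2 GFLOPS, which is also why raw GFLOPS alone says little about real-world performance once ROPs and memory bandwidth enter the picture.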
As for Android devices, we will have to wait and see whether Samsung, MediaTek and AllWinner produce any processors with the PowerVR GX6650. What we do know is that the AllWinner UltraOcta A80, a big.LITTLE octa-core processor, will use the PowerVR G6230 GPU. The G6230 offers 38.4 GFLOPS at 300MHz and 4 ROPs, which clearly doesn't match the power of the GX6650. We also know that the recently announced MediaTek MT6595, another octa-core processor with four ARM Cortex-A17 cores plus four Cortex-A7 cores, will use an Imagination PowerVR Series6 GPU, but we don't know which one.
What do you think? Would you like to see more Imagination PowerVR GPUs used in processors found in Android devices?