First, Chipworks released a die shot of the new Apple A8 SoC (archived at archive.org). It is built on TSMC's 20nm fabrication process, the entire capacity of which Apple allegedly bought out. From there, a bit of a debate arose over what each group of transistors represents. All sources agree that the chip is built around a dual-core CPU, but the GPU is a bit polarizing.
Image Credit: Chipworks via Ars Technica
Most sources, including Chipworks, Ars Technica, and AnandTech, believe that it is a quad-core graphics processor from Imagination Technologies: specifically, the GX6450 from the PowerVR Series 6XT. This is a narrow upgrade over the G6430 found in the Apple A7, which is in line with the initial benchmarks that we saw (and not in line with the 50% GPU performance increase that Apple claims). In terms of programmability, the GX6450 is equivalent to a DirectX 10-level feature set, unless Apple extended it, which I doubt.
Image Source: DailyTech
DailyTech has its own theory, suggesting that it is a six-cluster GX6650 with a horizontally-aligned layout. From my observation, their "Cluster 2" and "Cluster 5" do not look at all identical to the other four, so I doubt the claim. I expect that they heard Apple's 50% figure, expected six GPU cores as the rumors originally indicated, and saw cores that were not there.
Which brings us back to the question: "So what is the 50% increase in performance that Apple claims?" Unless there was a significant increase in clock rate, I still wonder if Apple is attributing its graphics gains to the Metal API, even though Metal is not exclusive to the new hardware.
But from everything we have seen so far, the GPU itself is just a handful of percent better.
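To put the Metal argument above in concrete terms: the API's efficiency comes largely from doing expensive state validation and shader compilation once, up front, so each per-frame draw is a thin command encode on the CPU. Below is a minimal Swift sketch of that model; the shader names and buffer setup are placeholders of mine, not anything taken from Apple's code.

```swift
import Metal

// Minimal sketch of Metal's command-encoding model (illustrative setup).
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary() else {
    fatalError("Metal is not available on this device")
}

// Built once, up front: a fully validated pipeline state object.
let descriptor = MTLRenderPipelineDescriptor()
descriptor.vertexFunction = library.makeFunction(name: "vertexShader")     // placeholder shader
descriptor.fragmentFunction = library.makeFunction(name: "fragmentShader") // placeholder shader
descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
let pipeline = try! device.makeRenderPipelineState(descriptor: descriptor)

// Per frame: encoding a draw is cheap because nothing is re-validated here.
func encodeFrame(pass: MTLRenderPassDescriptor, vertices: MTLBuffer) {
    guard let commands = queue.makeCommandBuffer(),
          let encoder = commands.makeRenderCommandEncoder(descriptor: pass) else { return }
    encoder.setRenderPipelineState(pipeline)
    encoder.setVertexBuffer(vertices, offset: 0, index: 0)
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    encoder.endEncoding()
    commands.commit()
}
```

Everything in `encodeFrame` is cheap bookkeeping, which is why lower CPU overhead per draw is a gain Apple could market as "graphics performance" without pointing to new silicon.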
Is this one being sued by Nvidia too?
From everything I have seen so far, nobody knows yet, but they pretend to know in an attempt to look clever. And the most popular way of looking clever is to act skeptical, disappointed, and blasé.
Apple owns a stake in Imagination Technologies, the makers of PowerVR, so expect Apple to offer some assistance to its supplier/investment. As far as the A8 goes, it's the four-GPU-cluster versus six-GPU-cluster debate. Then there is the question of the A8X tablet variant, which I hope and pray has the PowerVR Wizard (with hardware ray-tracing circuitry).

Apple needs a pro tablet that can run OS X, for the graphics applications that only OS X can run, and a four-core A8 with higher clock speeds and a larger thermal envelope could probably handle OS X. If Apple were smart, it would fund the PowerVR division of Imagination Technologies to create an exclusive discrete variant of the PowerVR Wizard for its MacBook laptops, and have laptops doing professional graphics rendering without the need for server-class CPUs. Ray tracing on the GPU could become very profitable, since expensive CPU power would not be needed for fancy reflections-within-reflections and other realistic rendering that can only be done with ray tracing.
Now, without Anand to review the A8, who else has the knowledge from assembly-language optimization manuals to write the code necessary to stall the execution pipelines, or cause a mispredict, and get at the A8's true pipeline depth and other under-the-hood resources? There is a lot that coding forensics can tell. Apple must spend loads of cash training the in-house assembly coders who hand-tweak the most essential OS code, like context switching.

Sure, looking at the naked die shots can be of some help, but on the coding side there are documented and undocumented ways of poking and prodding a CPU and watching the signals that come down the wires, or even getting a de-lidded but undamaged CPU running with an infrared camera focused on the hot spots. Every CPU/SoC ever made comes with some form of diagnostic mode in its hardware, like single-stepping, and other undocumented opcodes not listed in the regular assembly manuals; still, the undocumented stuff will filter out over time, or be found by forensic methods, both coding and otherwise.

Damn it, Apple, why the NSA-style secrecy around a CPU? Big Blue was never that secretive about its CPUs. Of course, IBM had a big enough market share at the time, and still does in mainframe/HPC systems, but it needed trained programmers, and lots of them; secrecy does not lend itself to learning about CPUs/SoCs, or to having a ready supply of trained personnel should market share suddenly grow by leaps and bounds.
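For what it's worth, the software probe this commenter describes does not require anything exotic. Here is a rough Swift sketch of one classic variant: time the same data-dependent branch over sorted versus shuffled input. Sorted data keeps the branch predictor near-perfect, shuffled data defeats it, and the per-element difference approximates the misprediction cost, which scales with pipeline depth. The numbers it prints illustrate the technique; they are not A8 measurements.

```swift
import Foundation

// Time a branchy loop and return nanoseconds per element.
func nanosecondsPerElement(_ data: [Int]) -> Double {
    let start = DispatchTime.now().uptimeNanoseconds
    var sum = 0
    for value in data where value < 128 {  // data-dependent branch
        sum &+= value
    }
    let end = DispatchTime.now().uptimeNanoseconds
    print("checksum: \(sum)")  // keeps the loop from being optimized away
    return Double(end - start) / Double(data.count)
}

// Same values, different order: only branch predictability changes.
let shuffled = (0..<1_000_000).map { _ in Int.random(in: 0..<256) }
let sorted = shuffled.sorted()

let predictable = nanosecondsPerElement(sorted)
let unpredictable = nanosecondsPerElement(shuffled)
print(String(format: "sorted:     %.2f ns/element", predictable))
print(String(format: "shuffled:   %.2f ns/element", unpredictable))
print(String(format: "difference: %.2f ns/element", unpredictable - predictable))
```

Compile with optimizations (`swiftc -O`), or debug overhead swamps the signal. Since only about half of the shuffled iterations actually miss, the per-miss penalty is roughly twice the printed difference; converted to cycles, it is a decent proxy for pipeline depth.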
I thought that, similar to the recent Maxwell and Tonga GPU architectures, this new GPU in the iPhone had a new texture compression mode/format called ASTC that increases fill rates. Could someone knowledgeable comment on this? (My initial thought is that it might only be accessible via the Metal API, but I have (A) done zero research, and (B) unless there are compatibility reasons, that would seem stupid.)
It's mentioned in the A8 analysis article from AnandTech:
http://www.anandtech.com/show/8514/analyzing-apples-a8-soc-gx6650-more
and explained more thoroughly in another AnandTech article:
http://www.anandtech.com/show/6134/khronos-announces-opengl-es-30-opengl-43-astc-texture-compression-clu/4
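For the curious, ASTC support can also be checked from Metal at runtime. The sketch below assumes iOS (the feature-set query is iOS-only); per Apple's Metal feature tables, ASTC pixel formats require GPU family 2, which the A8's GPU advertises and the A7's does not.

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not available")
}

// iOS GPU family 2 (the A8 generation) is the first to expose ASTC formats.
let supportsASTC = device.supportsFeatureSet(.iOS_GPUFamily2_v1)
print("ASTC LDR texture formats supported: \(supportsASTC)")

// If supported, textures can be created directly in an ASTC format,
// e.g. with a 4x4 block size:
if supportsASTC {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .astc_4x4_ldr, width: 256, height: 256, mipmapped: false)
    let texture = device.makeTexture(descriptor: desc)
    print("Created ASTC texture: \(texture != nil)")
}
```

On the OpenGL ES side, the same capability is advertised through the GL_KHR_texture_compression_astc_ldr extension string, so it need not be Metal-exclusive.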
Why is there anything to do with crapple on PCPer? This is where real computers are.
Eh, it's interesting, and at least it gives reasons why AMD and NVIDIA are still on 28nm (seriously, 10 million iPhones on TSMC's 20nm process could have been a lot of GPUs). I probably would not have posted this if I had known Ken was working on an article too, though.
Apple has not revolutionized the world with its A8 SoC, but continues along a proven path, with a chip architecture whose performance and energy efficiency are well established. Especially since Apple has the opportunity to fully optimize the A8 and iOS together. For Android, it's a bit more complicated because of the diversity of Android versions and SoC architectures.