A new report of leaked benchmarks paints a very interesting picture of the upcoming AMD Carrizo mobile APU.
Image credit: SiSoftware
Announced as strictly mobile parts, Carrizo is based on the next-generation Excavator core and features what AMD is calling one of its biggest-ever jumps in efficiency. Now alleged leaked benchmarks are showing significant performance gains as well, with numbers that would extend the IGP dominance of AMD's APUs.
Image credit: WCCFtech
WCCFtech explains the performance shown in this SiSoft Sandra leak in their post:
“The A10 7850K scores around 270 Mpix/s while Intel’s HD5200 Iris Pro scores a more modest 200 Mpix/s. Carrizo scores here over 600 Mpix/s, which suggests that Carrizo is more than twice as fast as Kaveri and three times faster than Iris Pro. To put this into perspective, this is what an R7 265 graphics card scores, a card that offers the same graphics performance inside the PlayStation 4.”
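For the record, the ratios in that quote check out against the quoted scores themselves. Here is a quick sanity-check sketch; the Mpix/s figures are the approximate leaked numbers above, nothing more:

```python
# Sanity check of the ratios quoted above, using the leaked SiSoft Sandra
# GPU shader scores (Mpix/s). All figures are approximate leaked numbers.
kaveri_a10_7850k = 270  # AMD A10-7850K (Kaveri)
iris_pro_5200 = 200     # Intel HD 5200 Iris Pro
carrizo_leak = 600      # alleged Carrizo score

print(f"Carrizo vs Kaveri:   {carrizo_leak / kaveri_a10_7850k:.1f}x")  # ~2.2x
print(f"Carrizo vs Iris Pro: {carrizo_leak / iris_pro_5200:.1f}x")     # 3.0x
```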
While the idea of desktop APUs with greatly improved graphics and higher efficiency is tantalizing, AMD has made it clear that these will be mobile-only parts at launch. When asked by AnandTech, AMD had this to say about the possibility of a desktop variant:
“With regards to your specific question, we expect Carrizo will be seen in BGA form factor desktop designs from our OEM partners. The Carrizo project was focused on thermally constrained form factors, which is where you'll see the big differences in performance and other experiences that consumers value.”
The new mobile APUs will be manufactured on the same 28nm process as Kaveri, with TDPs of up to 35W for Carrizo and a maximum of 15W for the ultra-mobile Carrizo-L parts.
Sounds good!
Well, let’s hope that we see those APUs in some devices. It’s sad that we haven’t seen many APU notebooks yet.
So, can we actually get APU-based mini PCs now?
The last AMD Brix was Richland, come on. Having a full-blown Carrizo NUC-type machine would be dope as fuck.
It’s a shame it’s not coming to the desktop though. I was going to upgrade my Phenom II X6 machine to an A8 Carrizo chip, but I might just wait for the A10-7850K to drop in price a bit and build one of those instead.
Finally, AMD is making some waves.
And this is still with DDR3 ^^
Imagine the 2016 generation with DDR4!
I have a tough time getting excited about these benchmark leaks. They’re pointless if OEMs refuse to put these chips in affordable laptops with decent hardware. Below are form factors I could accept, and could convince people to pick up for their personal use:
For Carrizo parts only
14″ to 17.3″ form-factors
1080p display
1″ thickness (please don’t give us thin, anemic pieces of crap)
Large (removable) batteries (45WHr at least)
Backlit keyboards
No touch screens (ready for Windows 10)
No discrete GPU (AMD dGPU switching is utter filth)
~$600 sounds about right for such a laptop.
For Carrizo-L parts only
11.6″ to 13.3″ form-factors
1080p display
1″ thickness (please don’t give us thin, anemic pieces of crap)
Large batteries (40WHr at least)
(Optional) Backlit keyboards
No touch screens (ready for Windows 10)
No discrete GPU
~$450 to $600 will work for me.
AMD does not need to have its chips in “premium laptops”. I would rather have them known for decent-quality, serviceable laptops that don’t break if you handle them wrong. I like having access to the hardware and wouldn’t mind changing out the RAM and HDD. The large volume should allow OEMs to give buyers the freedom that has become non-existent on the Intel side in the last few years. A disgusting trend, in my opinion. And my biggest hope is that they are able to bin the high-performance parts in sufficient quantities.
I also would love to see Carrizo-L in NUC-type devices (as mentioned in an earlier post). They should definitely be cheaper than the Intel stuff. A device with a significantly larger footprint, to accommodate a larger, quieter fan and provide easier access to the hardware, would be very welcome.
Oh well, a person can dream.
I would also like to add that the NUC-type device could hold a full Carrizo part too. That should be quite feasible, considering the quad-core Intel i7 is already available in the Brix.
Carrizo/Carrizo L Chips
6″ x 6″ x 2″ form factor Aluminum chassis
Dual SO-DIMM slots
HDMI and Display Ports
2 USB 2.0 and 2 USB 3.0 Ports (more if possible)
3.5mm Audio jacks
M.2/mSATA SSD slot
Gigabit Ethernet
You copied the wrong chart from WCCFtech.
Now, it’s probably been over 10 years since the last time I considered SiSoft benchmarks to be of any value. Carrizo probably does something well in the OpenCL benchmark, or the benchmark doesn’t run correctly and produces a false score. Or maybe it uses HSA on Carrizo but not on Kaveri. But these are just wild guesses. I could understand a score three times Kaveri’s if the stream processors are much better and we also get colour compression and a little extra boost from the Excavator cores. But there is no way to believe Carrizo can match the performance of a 265.
Well, the performance and memory-efficiency improvements of GCN 1.2, which Carrizo is rumoured to have, help with the limited bandwidth Kaveri suffered from; those chips benefited directly from faster RAM (obviously, since they have full GCN-based GPU cores that were designed to work with 5000+ MHz GDDR5 rather than 1600-2400 MHz DDR3). The efficiency improvement would go a great way towards solving that issue, and the extra performance from the architectural improvements should be useful too.
The chart is from the SiSoftware page WCCFtech referenced, not their post. I see it doesn’t have the GPU data; I’ll have to fix it.
That’s misleading, because OpenCL is not used in the majority of games. Look at the video shader compute score instead, which uses Direct3D.
Maybe not in Windows-based gaming engines, but on Linux things are different; hopefully AMD will get Mantle for Linux soon. There is a lot of other software that uses OpenCL to accelerate computation, such as LibreOffice, GIMP, Blender, Photoshop, etc. AMD’s graphics have more SPs and other execution resources than Intel’s overpriced graphics, and better driver support.
How can you say Intel graphics are overpriced when they come free with the processor itself? Unless you are talking about the rarely seen Iris Pro with eDRAM (as found on the Core i7-4770R), which is the fastest IGP known for a single-socket CPU+GPU (and also easily outperforms Kaveri).
Whatever the OpenCL performance, rendering graphics (as used in Windows gaming) is a different tale altogether. Graphics rendering is always limited by available memory bandwidth (since the GPU is sharing it with the CPU), which is why APUs need high-performance memory to perform well. That’s why I pointed at that video shader compute benchmark (click on it to see its description).
And AMD’s better graphics come with their APUs, and AMD’s whole SoC is much more affordable; Intel’s CPUs/SoCs are overpriced. The Carrizo APU beats Iris “Pro” graphics, and nothing can replace having lots of SPs that can work in parallel. Intel skimps on shaders and has to make up for its lack of SP/tessellation units, which means it lacks the wide parallelism that comes with having more of them. Try turning a high-polygon mesh model around in edit mode, with 600,000 polygons and 1,200,000 vertices, and watch things bog down on Intel’s graphics. AMD’s higher SP/tessellation-unit count helps, as does unified memory addressing, which saves a lot of moving of large blocks of data between CPU and GPU memory (Intel does not have unified memory addressing).
Intel’s GPUs may be tuned more for gaming, but gaming is about getting the lowest total polygon count per scene and making up for the lost definition with textures, etc. For graphics work, nothing beats high-polygon meshes for ray tracing, at polygon counts that would choke most real-time gaming engines. Graphics editing and rendering high-quality images require hours per frame and millions of polygons; polygon counts that choke all of Intel’s “Pro” graphics products.
AMD’s and Nvidia’s GPU products can both be used for graphics and gaming, while Intel skimped on execution units and can only handle gaming workloads. I do more with my computers than gaming, so I’ll continue to look for AMD or Nvidia when buying PCs/laptops.
Intel’s graphics are tuned for barely adequate gaming performance and are not suitable for any high-polygon 3D graphics workflow. Intel’s drivers also have too many problems (OpenGL bugs and such), and the open-source graphics programs use OpenGL, OpenCL, etc.
I’m looking forward to AMD’s future HSA-aware APUs and being able to do more, especially utilizing AMD’s integrated and discrete mobile GPUs simultaneously for graphics workloads; hopefully there will be laptops that pair AMD’s APUs with an AMD mobile discrete GPU. AMD showed some interesting driver/software work at SIGGRAPH that accelerates ray-tracing workloads on the GPU while also using the CPU (normally ray tracing is done on the CPU), so a more HSA-aware APU helps even for GPGPU workloads. Hopefully dedicated ray-tracing hardware will come to the GPU market (the PowerVR Wizard has hardware ray tracing).
And all of your gaming will still be limited by the memory bandwidth available to the GPU on that APU. That’s the main bottleneck for its graphics engine. No matter how many vertices it can process quickly (even millions per second), what can be rendered and written to slower dynamic RAM is limited by available memory bandwidth (which is shared with the CPU, so much less bandwidth than dedicated GPUs get).
Furthermore, Carrizo is a mobile platform, so it will be using much slower mobile LPDDR3 (those SODIMM thingies), unlike the desktop with its high-performance DDR3. Considering its limitations, and given its somewhat high TDP for mobile, you can expect around FX-7600-like performance from the highest model but Mullins-grade performance from the lowest model.
Just look at the PS4 and see how super wide its memory bus is, plus it’s using GDDR5 (all for increased memory bandwidth), just like discrete GPUs. The PS4 doesn’t exactly have an APU-class GPU inside, but rather a discrete-gaming-GPU-class one (around the Radeon HD 7870 and higher). That’s still faster than the GPU inside the Kaveri APUs.
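To put rough numbers on that bandwidth gap, here is a minimal back-of-the-envelope sketch. The formula is the standard peak-bandwidth calculation; the memory configurations below are the commonly cited specs for these parts, assumed here for illustration rather than taken from the leak:

```python
# Theoretical peak memory bandwidth: transfer rate (MT/s) x bus width (bytes).
# The configurations below are commonly cited specs, assumed for illustration.
def peak_bandwidth_gb_s(transfer_mt_s: int, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return transfer_mt_s * (bus_width_bits // 8) / 1000

# Kaveri APU, dual-channel DDR3-2133 (2 x 64-bit), shared with the CPU:
print(peak_bandwidth_gb_s(2133, 128))  # ~34 GB/s
# PS4, 256-bit GDDR5 at 5500 MT/s effective, also shared but far wider/faster:
print(peak_bandwidth_gb_s(5500, 256))  # ~176 GB/s
# Radeon R7 265, 256-bit GDDR5 at 5600 MT/s, dedicated to the GPU:
print(peak_bandwidth_gb_s(5600, 256))  # ~179 GB/s
```

That roughly fivefold gap in peak bandwidth is the crux of the skepticism in this thread about a DDR3-fed APU posting R7 265-class scores.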
Additionally, look at the hardware inside the world’s fastest supercomputers. No APUs there. Instead they have high-performance discrete co-processors such as Tesla and Xeon Phi, both of which handily outperform a memory-bandwidth-limited APU’s internal GPU when it comes to co-processing duties: CUDA, OpenMP, and OpenCL.
Intel shares its integrated GPU’s data via the common L3 cache, which can be much faster, so there’s no need for that brand-new cache-snooping technology in AMD’s latest APUs. And what about UMA? Nothing new, actually; that’s been around for ages. If anyone has used the Lucid Hydra engine, that’s UMA in action.
High-quality ray tracing is still exclusive to CPU-based rendering because of its algorithms (many of its functions cannot easily be parallelized). Even NVIDIA has so far demonstrated only some (limited) ray tracing on their GPUs. Did you know that?
This is why render farms are still around. For example, Disney’s Hyperion uses 55,000 cores (incidentally, an all-Intel-powered machine; reference: http://www.pcmag.com/article2/0,2817,2471740,00.asp).
There are render farms with POWER8 cores and Nvidia GPUs accelerating the whole process, and chances are the Xeon server farms are only using CPUs to do the ray-tracing part of the graphics, while the POWER8 farms have the full graphics capability of Nvidia’s pro GPUs behind them. A POWER8 processor has 12 cores with 8 dynamically variable threads per core; that’s 96 threads of ray-tracing power per chip. Dassault Systèmes is using POWER8s and Nvidia GPUs. I can’t wait for the third-party licensed POWER8 workstations to begin arriving; Tyan is offering POWER8, as will others. Just go to the professional server websites and see how far the POWER8 processor is ahead of the Xeon in server workloads. It’s going to be a great workstation market when all those third-party licensed POWER8-based products begin flooding the market; the POWER8 cores can execute 10+ instructions per cycle and keep those 8 threads per core speeding along.
AMD’s and Nvidia’s GPUs have the SP and other unit counts to avoid bogging down in 3D mesh-editing modes in Blender or other 3D software. Intel’s graphics are so tuned for gaming (to make up for their lack of resources) that they are no good for mesh editing. I need to be able to smoothly rotate my high-polygon mesh models and scenes, not wait 10 or 20 seconds for the interface to respond because of Intel’s inadequate tessellation/SP resources. High-polygon mesh modeling needs the most parallel SP/tessellation resources, which Intel is unable to provide with its GPUs.
Now that the PowerVR folks have the Wizard processor (with ray-tracing hardware on the GPU), it’s only a matter of time before the entire graphics industry embraces ray tracing on the GPU, and then a server SKU with lots of expensive CPU cores will no longer be a prerequisite for ray-tracing workloads. AMD is utilizing its HSA-aware systems to at least accelerate ray tracing on the GPU, but when ray-tracing circuitry is baked into the massively parallel GPU itself, even laptops will be able to do what used to require expensive workstation/server CPUs.
DizzNee does not impress me! The POWER8 does. You can read the POWER8 white paper/presentation from the Hot Chips symposium (I believe it was 2013, but you can Google it). And the market will not have to pay IBM prices for the licensed third-party POWER8s; there will be plenty of competition starting this very year. Hell, for ray-tracing workloads even a chip with loads of ARM cores will do, as long as there are plenty of SIMD resources. I look forward to the ARM server SKUs too.
The world’s #1 fastest supercomputer, Tianhe-2, is powered by all-Intel parts, with Xeons coupled with Xeon Phi as the co-processor (instead of Nvidia Tesla). It’s nearly 2x faster than the #2 fastest supercomputer, Titan, powered by AMD Opterons coupled with Nvidia Tesla as the co-processor. This is followed in the #3 position by Sequoia, powered by IBM POWER CPUs. That goes to show the strength of each configuration; so much for your POWER CPU worship. Nowadays all the new supercomputers, especially those from the famous Cray, use Intel Xeons. It’s not hard to see why Disney chose Intel Xeons for their Hyperion.
Ray tracing requires specialized functions (and programming), which is why normal GPUs have a hard time with it (mostly crude attempts at ray tracing). I also know about the new prototype GPU IP from Imagination (owners of PowerVR) that is designed specifically for real-time ray tracing.
Also, most enthusiasts buying a powerful Intel CPU (such as the i7-4790K) will usually pair it with a discrete graphics card (from Nvidia or AMD) rather than use its integrated GPU. This is why the APU has no place in mid- to high-end gaming PCs.
Oh yes, the Iris Pro reviews (both the Core i7-4770R and the Core i7-4950HQ)…
http://hexus.net/tech/reviews/cpu/67021-intel-core-i7-4770r-22nm-haswell/?page=8
http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested
And that includes image quality as well.
Image quality in games is a function of textures and other shortcuts, and in gaming the math libraries’ sampling settings are not dialed up very high the way they are in graphics work. In high-quality rendered images, with the ray-tracing algorithms set to high and rays of light cast upon high-resolution 3D models, where even the fine filigree is present in the mesh rather than simulated with UV-mapped textures, Intel’s product does not make the grade. High-resolution mesh models have the actual vertices and polygons present, allowing the mathematically simulated rays to bounce off and reflect, refract, cast shadows, be subsurface-scattered and ambient-occluded, and do a whole host of other things that only light does properly, not the quick-and-dirty mathematical approximations that games use to get by at 30+ FPS. Intel’s GPUs cannot move high-polygon mesh models around for editing without the whole interface bogging down like it’s in a tank of molasses, making a workflow that should take minutes take hours. You move the mouse to rotate a model and wait 5 or 10 seconds for the UI to update, and by that time the mouse has moved again, so the UI lurches and jerks on Intel graphics, while with AMD or Nvidia everything is fluid and smooth and the interface responds in milliseconds.
You are arguing gaming, while I am arguing suitability for both gaming and high-polygon graphics editing. Intel may be barely suitable for gaming (not high-powered gaming), but AMD’s and Nvidia’s GPUs are suitable for both.
You’ll scream Iris Pro, but most of Intel’s SKUs do not get the so-called “Pro” graphics; it comes only in their most costly SKUs. Never have so many paid so much for such mediocre graphics as they have for Intel’s. No one games high-end on Intel graphics. Let’s leave those high-end settings on, remove the AMD or Nvidia graphics from the equation, and see how Intel’s graphics game.
Integrated graphics have come a long way, but they are still not adequate enough to displace discrete GPUs, and APUs at the gaming level are not that great either. Most of the time, using only the integrated GPU inside the APU, game settings and details have to be turned down to get playable frame rates (especially on 1080p or higher screens). This is why mid- to high-end gaming PCs mostly do not use APUs and instead use a proper CPU and a real discrete graphics card. Anyone buying a powerful Intel processor (such as the i7-4790K) will usually not use its integrated graphics, opting for a discrete GPU instead. Lucid Hydra is an interesting engine, as it can have a discrete Nvidia GPU render together with Intel’s integrated GPU and output through the integrated GPU’s video port. Such a capability means it is crossing barriers (including memory access) and able to combine different GPUs.
The Iris Pro is a limited SKU because it is only suitable for certain configurations, such as tiny PC boxes (like the Gigabyte Brix Pro) where discrete GPUs cannot be used (they would overheat in the cramped, limited space inside the casing). Since Iris Pro uses eDRAM, it also costs more, and is thus limited to (high-end) mobile ultrabook and ultra-small form factor platforms. You can still find them, for example inside a 15-inch MacBook Pro (2014 model).
Can I get a 15-17″, 1080p, IPS, 75 Hz, FreeSync notebook with that top Carrizo?
… with NO touchscreen, NO dedicated GPU, NO HDD, just one 2.5″ SSD
PLEASE?!
Almost seems like half of the internets is waiting for said notebook, me included.
+1!! I’m waiting for said laptop too!
I’d add:
– no DVD bs
– no USB 2.0 bs
– no VGA/HDMI/DVI just 2xDP (adapters are cheap and ubiquitous)
– no glare screen
– 6 cell battery
– 2x USB 3.1 Type-C with the new reversible connector
– SSD should be >240 GB and SATA3!
I know it’s only one “possible” leak, but are they implying that this will be an APU with roughly the GPU power of an R7 265? So there will be people out there WITHOUT a discrete GPU who have better graphics than I’ve got? That’s cool, but it kinda makes me sad.
Imagine how people with dedicated dual-GPU cards that are slower than even your GPU must feel…
They must be suicidal.
What I meant to say is: your logic is disgusting.
Uhhhhhhh, k? Perhaps I was too simple with my words and so was misunderstood.
It’s very cool if this is true, and a trend. If the sub-$120 discrete GPU market can be replaced by APU graphics, then it’s inevitable that that level of graphics will become the accepted bare minimum, and those of us with entry-level cards will be left behind, or forced to upgrade some fairly new parts as they quickly become outdated. It’s nothing new, but GPU/CPU progress has sorta slowed down of late, and love it or hate it, that has been very kind to the pocket.
I wasn’t implying that I get depressed when people have better stuff than I do. If that were true I woulda killed myself decades ago.
Everything still gets outdated very fast.
Performance progress has slowed down, but instead we are getting all kinds of quality-of-life improvements and features: new APIs and, most importantly, variable refresh rate. It’s as important for gaming as the first color TVs were for movies and shows.
Buying stuff just to keep up is a very poor way of looking at things either way.
For example, I couldn’t give a single fuck about a new video card before Half-Life 3 or DOOM 4 is out. I just can’t care about incredibly bad and demanding games like Crysis 3.
And I would absolutely LOVE it if they made a processor with integrated graphics that makes quad Titans obsolete, even if I owned a setup like that.
I don’t know if I was able to get my point across, but whatever. It’s late.
Excellent little things for customers’ budget rigs.
Hey guys, AMD Carrizo looks good, but why aren’t they using Samsung’s and GF’s 14nm and 20nm tech, which is ready for production? And why aren’t any OEMs using AMD APUs and GPUs to produce a decent gaming laptop? These laptop OEMs always overcharge for laptops. The reality is that even mobile SoCs like the Nvidia Tegra K1, Tegra X1, Snapdragon 810, and A8X beat entry-level video cards like the GT 820, and a laptop with that card costs 50-60000.
Samsung’s 14nm is still considered a paper announcement, while GF is waiting on Samsung; Intel is the only company shipping real 14nm products, and there is no commercial 20nm from either foundry yet. True gaming laptops usually feature a good CPU and a discrete mobile GPU from either Nvidia or AMD instead of a limited APU; the gaming performance of an APU is still highly inadequate. The mobile GPUs found in Snapdragon and Apple A8X SoCs are a different technology from desktop GPUs: they are designed around OpenGL for Android rather than DirectX (for Windows), are built specifically for ultra-low power consumption, and use a very different rendering method. Furthermore, the new Tegra X1 is not suitable for mobile devices like phones and tablets, as it has a rather high power consumption of 10W.
I’d like to see a Carrizo laptop in an ultrathin form factor.
It’s said to perform at its best at a 15W TDP.
Something the size of a Samsung Series 9 or an Asus Zenbook, with at least a 512 GB SSD and 16 GB of RAM (at the max supported frequency).
The previous-generation FX-7500 was not so power efficient. This one should be good, I hope.
For a proper laptop I think I’ll wait for AMD Zen.
Then we’ll see something like a MacBook Pro with an AMD processor in it.