Ashes of the Singularity: Escalation (DX12)
Though Ashes of the Singularity is a very heavily threaded game, it still depends on single-threaded performance and, of course, core-to-core communication. The Ryzen 3 processors do well enough here, matching the performance of both the Pentium G4560 and the Core i3-7100.
Civilization VI (DX12)
In Civ6 we see the Ryzen 3 1200 match the performance of the Pentium G4560, but the 1300X is able to come out ahead of the Core i3-7100 and approach the 7350K.
Deus Ex: Mankind Divided (DX12)
Even at 1080p, performance between these platforms is identical, showing very little CPU-to-CPU variation.
Far Cry Primal (DX11)
Far Cry Primal has shown sizeable performance gaps on previous Ryzen hardware relative to the Core i5/i7 family, but in this case the Ryzen 3 1300X/1200 are right on par with the Core i3-7100 as well as the more expensive Ryzen 5 parts.
Grand Theft Auto: V (DX11)
GTA V will definitely take advantage of added cores, and the Ryzen 3 takes its first strong performance win over the Core i3, with a 12% advantage for the 1300X over the Core i3-7100.
Hitman
Hitman scores put the Ryzen 3 at the bottom of the group, more or less, but the variance between it and the Core i3-7100 is small. The 7350K, on the other hand, is able to show a 15% performance advantage over the AMD 1300X.
Rise of the Tomb Raider (DX12)
Scores for the Ryzen 3 line up with the performance of the Core i3 hardware, within 6% of the 7100. (Ignore our Ryzen 5 results here – they are from the pre-patched run of the game. Updating soon!)
Ghost Recon: Wildlands (DX11)
Wildlands shows a clear division between the Ryzen 3 and Core i3-7100/Pentium G4560 on one side and the 7350K and above on the other. It's interesting to see the gap from the 7100 to the 7350K, as it indicates the added clock speed is improving things for Intel's platform. However, the Ryzen 5 does much better too, so it's likely that core count is also a benefit for the game engine.
Compared to the gaming results in the initial Ryzen 7 processor release, and even those in the Ryzen 5 launch, the Ryzen 3 situation appears to be much more…sedate. No, it doesn't win the benchmarks against the Core i3 very often, or by much when it does, but it also doesn't have the large gaps between the AMD and Intel hardware that left us scratching our heads and doubting our testing methodology. It looks like the Core i3-7100 and the 7350K would make better CPUs for a purely gaming budget build, but integrating the Ryzen 3 into that same build would result in an equally high-performance experience, with possible benefits in other application workloads.
So the new 1300 and 1200 parts are still harvested 2-CCX chips, with the extra issue of not supporting high RAM frequencies. Wow… I would expect nothing less from AMD; a truly useless product! It's well known that cross-CCX latency is the main culprit behind Ryzen's poor performance in gaming and data-intensive applications. The 1300 loses to Intel's dual-core 7300 in games and most other benchmarks.
You mean the i3-7350K, which is not only pricier but also needs a cooler and a Z170 board if you want to overclock. The R3 CPUs, on the other hand, can hit 3.9/4.0 GHz with the stock cooler….
I mean, sure, it's not like they couldn't do it, right? Because what do they know about microarchitectures and stuff like that? You tell them, big engineer.
The 7300 doesn't show up in this review; the 7350K does, which is a $150 part that requires a Z270 board to overclock, compared to $130 for the 1300X and $110 for the 1200, which can overclock on B350 boards that cost substantially less. The 7350K is about $10 less than an R5 1400, which is a 4-core, 8-thread part. You have to remember to take the cost into account, and for the price, those extra true cores could make quite the difference in how a fully loaded machine with background processes and multitasking will work for you. I really hope PC reviewers start taking the cost of the supporting motherboard into account in their reviews. I understand not having the time to do the OC tests, though. Several other review sites have those figures, and you'll see these chips running near the 7350K at a much lower system price point.
Read other reviews: the CCX "latency" on Ryzen 3 is next to negligible compared to Ryzen 5 or Ryzen 7, and their performance is very comparable to what they are targeting on performance/price versus Intel's 7th-gen Core i3 line. Overclocked, the Ryzen 1300X is directly comparable to Core i5 parts that have far more "oomph" and cost more. Basically, the fewer cores there are to stress the CCX/Infinity Fabric, the less of a bottleneck it is; that's number one. Number two, AMD has done a lot of optimizing through microcode updates since Ryzen 7 came out, and the "latency" is vastly reduced from where it started months ago. So again, read other reviews.
What is the 7300? Is it not dual core with SMT, i.e. a native dual core with Hyper-Threading? All the reviews I've seen thus far say, quite simply, that the 1300X is very much a match if not better. Worst case, in games/apps that are very reliant on high clock speed, or in Intel-biased titles, the comparable at-stock Intel chip is ~2-5% faster. Add in the overclock on the 1300X, which those "cheap" 7-series chips are unable to do, and the tables turn in AMD's favor through more raw clock speed.
Your money, your choice, but it pays to do the reading across many places to best form a legit opinion, or you are, IMO, just a putz ^.^
Intel's new mesh interconnect/fabric also makes some latency trade-offs in order to get that scalability to more cores going forward, so there will be some increased latency on any of the newest Intel SKUs as well. At the 7nm process node, AMD could also engineer six cores per CCX instead of four and maybe avoid the extra latency-inducing hops outside the CCX unit's bounds or the modular (Zeppelin/newer) die's bounds. It is always going to require some sort of latency tradeoff unless both Intel and AMD move toward even more complicated mesh topologies.
Also, the more cores there are, the more opportunities for latency hiding in compute workloads. Just look at AMD's and Nvidia's GPUs: both GPU makers are well versed in engineering latency hiding in software and hardware across thousands of shader processing cores.
Maybe some larger L3 caches will be employed as well, along with 3D-stacked HBM2 (or newer) memory with even wider than 1024-bit buses to each HBM stack, placed closer to the processor die complexes to cut down on latency. Perhaps even larger-capacity 3D-stacked L3/L2 caches with TSV connections directly into the processor's cores.
The larger and more numerous CPU core counts become, the more GPU-like the CPU's logical design and interconnects will have to become. Intel still has that Larrabee project IP, and AMD has its GPU IP, to create newer CPU designs with 32 and above CPU cores per MCM/die/interposer module. Silicon interposers, the active rather than passive designs, are also starting to be discussed more: the logic and traces that make up a complete coherent interconnect fabric can be etched onto the silicon interposer itself, able to connect any mix of CPU/GPU/other processor dies together. So whole computing-system dies could be attached to active silicon-interposer-based interconnect fabrics for all manner of processing needs.
Why are there no overclock tests in this review?
No overclock results, which would be the main focal point for budget gamers.
It's a Ryzen, which means it will OC up to ~4 GHz and hit a brick wall, both on air and liquid.
Then? That seems to be a decent boost from 3.4/3.7 GHz, and it can be done on the stock cooler in some cases, as Tom's Hardware and Hardware Unboxed managed to do.
We just ran out of time before this release due to other things we were doing. We are going to update with our results on that soon!
Awesome, thanks! I guess there is no need for my previous comment then XD.
I agree, I am kind of disappointed; I expected overclocked results as well, because that is what I will be buying this for. Tom's Hardware and Hardware Unboxed/TechReport included OC results, and they make the CPUs look MUCH more impressive!
I wish you would make those glorious frametime and percentile graphs similar to your GPU tests. It would be very interesting to see how frametimes compare between the AMD 4C/4T and Intel 2C/4T parts, especially in games such as GTA V.
We should probably do that. But MOST of the time, in our CPU testing results, commenters want things kept simpler.
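For what it's worth, the percentile numbers those graphs are built from reduce to a few lines of math. A minimal sketch in Python (the frame-time data and the `frametime_stats` helper are hypothetical, and the nearest-rank percentile used here is just one of several common definitions):

```python
# Sketch of the math behind frametime/percentile graphs. The data below is
# made-up example data, not results from this review.

def frametime_stats(frametimes_ms):
    """Return (average FPS, 99th-percentile frame time in ms)."""
    times = sorted(frametimes_ms)
    avg_fps = 1000.0 * len(times) / sum(times)
    # Nearest-rank 99th percentile: 99% of frames completed at least this fast.
    idx = min(len(times) - 1, round(0.99 * len(times)) - 1)
    return avg_fps, times[idx]

# Two hypothetical 100-frame runs with the same average FPS:
smooth = [18.3] * 100                # perfectly even pacing
stutter = [15.0] * 98 + [180.0] * 2  # mostly fast, with two long hitches

print(frametime_stats(smooth))   # even pacing: percentile matches the average
print(frametime_stats(stutter))  # same average FPS, far worse 99th percentile
```

This is why the percentile view matters for a 2C/4T vs 4C/4T comparison: two CPUs can post identical average FPS while one of them stutters badly.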
in before #Ryzen_SEGV_Battle
and here’s the processor clockspeed breakdown that’s missing from this article.
         │ Ryzen 3 1300X │ Ryzen 3 1200
Base     │ 3.5 GHz       │ 3.1 GHz
All-core │ 3.6 GHz       │ 3.1 GHz
2-core   │ 3.7 GHz       │ 3.4 GHz
XFR      │ 3.9 GHz       │ 3.45 GHz
Hey Ryan, I think the RAM frequency specs for Ryzen are not correct. Ryzen supports 2666 MHz with single-rank modules; 2400 MHz only applies when dual-rank modules come into play.
I believe you are correct, but that hasn't changed in the Ryzen family, correct?
I'm really interested in building an AMD-based low-cost APU PC using an AM4 motherboard! And now that Bristol Ridge APU SKUs are up for non-OEM purchase, can PCPer benchmark some BR/AM4 low-cost builds, with a look toward the future when there will be plenty of Zen/Vega APUs available to upgrade these BR/AM4 APU systems to Zen/Vega APU systems?
“AMD Releases Bristol Ridge to Retail: AM4 Gets APUs”
Well, no. But again, Ryzen allows for (two) 2666 MHz single-rank modules, while your spec comparison table mentions only 2400 MHz. Since most of the cheaper RAM modules are (afaik) single rank rather than dual rank, depending on capacity (8+ GB sticks are more likely to be DR than 4 GB ones), it would be fair to mention that, or to explicitly state that 2400 MHz is what is supported for (two) dual-rank modules.
Now on to more of the Zen/Vega Ryzen APU-based systems. And there is already some new news on that front. But I'm really waiting on a 35-45 W, 4-core/8-thread Zen/Vega top-end laptop APU SKU, with the highest-clocked, highest-nCU-count integrated GPU, in a NON-[thin and light] laptop form factor.
I hope there will be some higher-wattage mobile Zen/Vega Ryzen APU SKUs for those who want more of a workhorse laptop that does not throttle prematurely for lack of a proper laptop cooling solution. And none of that single-channel DDR4 memory nonsense.
I do not think there will be any HBM2-enabled APUs this time around among the first-generation Zen/Vega APUs. But maybe when the latter half of 2018 gets here, there may be some HBM2-enabled APUs from AMD, and some mobile workstation-grade Zen APUs (more than 4 cores) also built on an interposer with a much fatter Vega- or Navi-based GPU die and dual HBM2 stacks.
I liked that this review had some of the 'higher end' processors, the 6-core Ryzen 5s and the i5-7600K, for comparison. Not equally priced, but good for reference on what the next $60-100 added to your system can bring.
I think AMD has done a wonderful job with Ryzen, not limiting the end user from the performance available, which seems to be a ~4.1-4.2 GHz clock limit at most.
It would have been nice to see them "tweak" the lower core/thread chips a touch better and cull the die of what is not needed. That would likely have incurred a slightly higher cost, but it would likely also have reduced the power needed to feed the unused portions, which, while deactivated, are still present; the Ryzen 3 is the exact same die as the Ryzen 7, which explains why power draw is so similar.
Now, as far as TDP goes: FFS, TDP is NOT, nor has it ever been, power in watts. It is the cooling required to keep the chip, be it CPU, graphics, DRAM, etc., properly cooled to the maker's design spec. So if it says 65 W TDP and you followed AMD's or Intel's or Nvidia's or any other maker's test conditions, for example 20°C ambient air with a 65 W-capable cooler under load, it would maintain that CPU, GPU, or whatever at proper temperatures at a ~65-80% duty cycle indefinitely, for the expected life of the product.
If you are going to review products, that is fine; we always need as many reviews as possible to see the products in question and better decide whether they should be purchased for our needs. But seriously, if you do not know what a rating means, then you should not use it in an "it's only rated at 65 W TDP, so why is it using more than that?" kind of way.
All that being said, AMD more often than not has been very "realistic" with its numbers, verging on making them higher rather than lower. Intel on average usually underestimates, almost as if under perfect test-bed conditions. And Nvidia is Nvidia: they rely on fancy VREGs and such to manage power use, and more often than not, if they say "180 W", when loaded in the real world they tend to go over the rating. That is my opinion and my experience.
Anyways, to sum it up once again: TDP is NOT wattage, it is cooling required.
The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often the CPU or GPU) that the cooling system in a computer is designed to dissipate in typical operation. Rather than specifying CPU’s real power dissipation, TDP serves as the nominal value for designing CPU cooling systems.
The TDP is typically not the largest amount of heat the CPU could ever generate (peak power), such as by running a power virus, but rather the maximum amount of heat that it would generate when running "real applications." This ensures the computer will be able to handle essentially all applications without exceeding its thermal envelope, and without requiring a cooling system sized for the maximum theoretical power (which would cost more for headroom that is rarely needed).
Some sources state that the peak power for a microprocessor is usually 1.5 times the TDP rating. However, the TDP is a conventional figure while its measurement methodology has been the subject of controversy. In particular, until around 2006 AMD used to report the maximum power draw of its processors as TDP, but Intel changed this practice with the introduction of its Conroe family of processors.
A similar but more recent controversy has involved the TDP measurements of some Ivy Bridge Y-series processors, with which Intel introduced a new metric called scenario design power (SDP).
AMD's implementation is on average more accurate in rating the ACTUAL power consumed.
TDP is a "guessing game" more often than not if you use it as a reference for how much power the CPU/GPU will draw. The hotter it gets, the more power it may draw even though the product's TDP technically has not changed; conversely, the colder it gets, the less (or more) it may draw, again with the TDP unchanged. To my knowledge there is nothing that will tell you the actual power consumed on the fly short of very expensive equipment.
I'm done. Put a different way: the Ryzen Spire cooler is designed for a 65 W TDP (cooling capacity), and it does exactly what it should, keeping the CPU in spec for its cooling requirements. The Ryzen line in general are very advanced CPUs with all kinds of fancy sensors built in, so they can vary voltages, clock rates, cache speeds, and so on to manage power usage, and therefore heat output, very well.
Love the extensive Performance/dollar graphs!
Different review from ANANDTECH
Pentium G4560 – Grandmother pc
Intel i3 7100 – Light office pc
AMD r3 1200 (OC@3900) – Entry level enthusiast & cheap gaming pc
AMD r5 1600 (OC) – Mainstream pc
Intel i7 7700k – Top gaming pc
AMD r7 1700 (OC) – Sane top productivity pc & high gaming pc & entry server
AMD threadripper || Intel Xeons – app adjusted or high end servers
“Pentium G4560 – Grandmother pc” ?
No. It's a budget enthusiast part. No one is gonna buy one of these for a grandma; there are plenty of embedded A8/A10 or Atom CPU/board combos for far less money. The G4560 is for those who want to save every possible penny to spend more on the GPU. And it's fine for a dedicated gaming rig.
Atom for "rich" web browsing… old people are generally slow, but some of them are also impatient 🙂 I think the Pentium is not so expensive; it's family, after all, you can be more generous 🙂
My last two home desktops were built with Pentiums, so I'm definitely aware of their potential. But a real 4-core CPU for $20-30 more has, besides everything mentioned in reviews, an unmeasurable but substantial effect on responsiveness, so I think it should become the entry level for enthusiasts.
P.S. Maybe I forgot the media PC, also suitable for Pentium or Intel i3 CPUs.