Perf per Dollar, Pricing, Conclusions
Performance Per Dollar
In consumer processors, performance per dollar is considered the king of all KPIs (key performance indicators). For the workstation and prosumer market it carries slightly less weight, as consumers in this space are often willing to pay more for additional performance relative to the lower priced product stack. Still, part of AMD’s story for Threadripper is offering more performance at the same price (1950X), or matching performance at a lower price (1920X). Did it live up to that?
For single threaded workloads, the value of Threadripper and Skylake-X just isn’t there. Audacity and the CBR15 1t result give the edge to Intel’s mainstream processors by a wide margin. With similar or better performance than any of the HEDT solutions, and prices at one-third the cost (or lower), that should come as no surprise.
Fully multi-threaded workloads are where the story gets more interesting. Cinebench, Blender, POV-Ray, and Handbrake all show noticeably improved value propositions for discerning buyers compared to the Core i9-7900X. With a shared price of $999, the 1950X and the 7900X essentially differ only by the performance delta between them, ranging from 20% to 37% in our graphs above. The Threadripper 1920X offers better performance per dollar than the 1950X in all the multi-threaded results as well, though clearly the 1950X’s absolute performance is going to be higher. (The close proximity of perf/dollar for the 1950X and 1920X showcases the linear pricing model that AMD has adopted.) Note that though the likes of the 7700K competes with the 1950X in these performance-per-dollar graphs, the performance advantage of the 1950X will offset the price delta for content creators.
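The perf-per-dollar math above can be made concrete. In this illustrative sketch the scores are normalized to the Core i9-7900X (1.00): the 1950X’s 1.37 reflects our best-case 37% Cinebench lead, while the 1920X’s 1.00 is an assumption based on it roughly matching the 7900X in threaded tests.

```python
# Illustrative perf-per-dollar comparison using the launch prices quoted in
# this article. relative_perf values are normalized to the 7900X; the 1920X
# figure is an assumption (it "often matches" the 7900X in our threaded runs).
chips = {
    "Core i9-7900X":      {"price": 999, "relative_perf": 1.00},
    "Threadripper 1950X": {"price": 999, "relative_perf": 1.37},
    "Threadripper 1920X": {"price": 799, "relative_perf": 1.00},
}

for name, c in chips.items():
    perf_per_dollar = c["relative_perf"] / c["price"]
    print(f"{name}: {perf_per_dollar * 1000:.2f} relative perf per $1000")
```

At equal prices the perf/dollar gap is simply the performance gap, which is why the $799 1920X, merely matching the 7900X, still comes out ahead on this metric.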
Pricing and Availability
AMD Ryzen Threadripper processors have been available for preorder, along with X399 motherboards, since July 30th.
- AMD Ryzen Threadripper 1950X – $999 – Amazon.com
- AMD Ryzen Threadripper 1920X – $799 – Amazon.com
- ASUS Zenith Extreme X399 MB – $549 – Amazon.com
- Other X399 Motherboards – Amazon.com
It should go without saying: Threadripper isn’t cheap. Though it offers great value compared to the current Intel Skylake-X solutions on the market like the $999 Core i9-7900X, $799 and $999 for a PC processor is a steep price to pay for any consumer, content creator or not. The HEDT market is a higher margin, higher priced segment, and one that AMD is happy to be entering for the first time. It does so by following the same principles as it did with Ryzen at the start: acknowledge the areas that are weakest and strongest, and double down on value in the areas of strength. The Threadripper 1950X and 1920X do exactly that, targeting content creators with performance per dollar that Intel can’t match. At least not today.
Closing Thoughts
Hopefully you’ve been paying attention these last many pages, as the launch of the AMD Ryzen Threadripper processor and X399 platform is not a simple read-the-last-page kind of release. From a technology and architecture standpoint, what might at first appear to be a simple design, with two identical dies on the same substrate doubling cores, threads, cache, and memory controllers, turns out to be much more complex. The intricacies of the Zen memory controller and cache hierarchy, tied to the performance and capability of Infinity Fabric, mean that workloads we previously felt were completely understood and known quantities take on a new light. We saw that with the first Ryzen 7 launch and it remains the case today. AMD’s Zen design is a phenomenal CPU architecture that has returned the once down-trodden giant to relevance, but it does so with complexities that will require a long-term outlook for software development to address.
The platform side of Threadripper, including both the 64 lanes of PCIe 3.0 from the processor and the X399 motherboards that partners are building for this new socket, is a win for AMD. That many I/O lanes means that motherboard vendors and consumers have a lot of flexibility for building the system they need. Want as many PCIe lanes as possible for high GPU counts? It can be done. Want high speed networking along with PCIe attached storage for a specific bottleneck you have? Threadripper can enable it. Obviously, the boards and platforms need to be tailored for these use cases, but the first round of motherboards we have seen details on from ASUS, MSI, Gigabyte, and others is a solid start. AMD will once again have the flagship motherboards for its platform, exceeding the capability of Intel’s X299 (with much less confusion around PCIe division, etc.).
Performance for Threadripper falls into two categories: lightly threaded and highly threaded. Lightly threaded and single threaded workloads will generally run faster on the Core i9-7900X, and even on the mainstream Core i7 family of processors that feature higher IPC and higher clock rates. Games still fall into this category, so even though many enthusiasts are drooling over what Threadripper will bring, peak gaming performance at lower resolutions isn’t it. If you are gaming at 4K, or even 2560×1440 for the most part, Threadripper is quite capable of running within 8% of the performance available on the 7900X or 7700K.
For prosumers that often utilize software that can take advantage of high thread counts, the 32-thread 1950X will likely offer a sizeable performance advantage over the best Intel has to offer. Cinebench and POV-Ray, for example, run 37% and 30% faster (respectively) on the 1950X compared to the Core i9-7900X. Handbrake, Blender, and our H.264 encode tests show slightly lower, but still noticeable, performance advantages too. The 12-core Threadripper 1920X often matches the performance of the 7900X as well, even with a $200 price advantage. If you can take advantage of high core counts in your daily workloads, be it for video, rendering, ray tracing, analytics, etc., you are going to find AMD Ryzen Threadripper to be a fantastic product.
Should you buy it? In general, the answer is going to be “no” for anyone when asked if they need a thousand-dollar processor. Even when Intel had the market to itself with the Extreme Editions that repeatedly found their way to store shelves at $999, we always knew they were for the most extreme of enthusiasts and the content creators that could justify the price to performance ratios. Threadripper falls into that same category, but it offers an improved enough outlook on performance per dollar for highly threaded workloads that I see it stretching down to other consumers as well. Anyone itching to spend some coin to support AMD’s return to flagship status will be impressed by what they purchase. I am eager to get my hands on the Threadripper 1900X later this month, a $549 8-core offering that will have familiar workload performance but allows for the same connectivity support as the higher priced CPUs.
If content creation is your livelihood or your passion, Ryzen Threadripper is targeted directly at you and provides a competitive solution that AMD has been unable to offer for over a decade. Threadripper puts AMD back in the driver’s seat, offering the highest performance, highest core-count CPU for the high-end market today.
The only question that remains is how Intel’s remaining Skylake-X processors might change the story this summer and fall. We know that prices will be higher, but are the recently announced clock speeds enough to jump performance up and put the Core i9 family back on top as king of the hill?
I’m very curious on how will the two dies and memory modes affect virtualization? I’ve only experimented with VM in the past but is it possible to run two Hexa-cores windows VM and with each individual memory nodes assigned to each VM?
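With the platform switched to its local (NUMA) memory mode, this maps naturally onto VM pinning: under KVM/libvirt, for example, a guest’s vCPUs and memory can each be bound to one die. The element names below are real libvirt domain XML syntax, but the logical CPU numbers are assumptions; which CPUs belong to which die varies by OS and firmware, so check `numactl --hardware` or `lscpu` first. A second hexa-core guest would mirror this fragment with the other die’s cpuset and `nodeset='1'`.

```xml
<!-- Hypothetical libvirt domain fragment (not the reviewers' setup): a
     6-vCPU guest pinned to the first die's logical CPUs, with its RAM
     allocated strictly from NUMA node 0. Verify the cpuset numbering on
     your own system before using anything like this. -->
<vcpu placement='static' cpuset='0-11'>6</vcpu>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```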
Are you setting the Blender tile sizes to 256 or 16/32?
Just wondering since an overclocked 5960x gets 1 minute 30 seconds on the BMW at 16×16 tile size. Significant difference that shouldn’t just be a result of the OC.
For reference: 256 or 512 are for GPU and 16 or 32 are for CPU – at least for getting the best and generally more comparable results to what we get over at BlenderArtists.
When reading is not enough, the mistakes are OVER 9000!
“If you content creation is your livelihood or your passion, ”
” as consumers in this space are often will to pay more”
” Anyone itching to speed some coin”
” flagship status will be impressed by what the purchase.”
” but allows for the same connectivity support that the higher priced CPUs.”
“”””Editor””””
Now just point me to the pages… 😉
Nice to see a review with more than a bunch of games tested. Keep up the good work!
Shouldn’t a test like 7-zip use 32 threads as max, since that is what is presented to the OS? Right now it only uses 50% of the threads on TR but 80% on the i9-7900X.
Silly performance, looking forward to the 1900X and maybe 1900.
I sometimes wonder why nobody ever points out that within CCX (4 cores that can allow a lot of games to run comfortably) ZEN has latencies of half those of Intel CPUs. Binding a game to those 4 cores (8 threads like any i7) has significant impact on performance. It does not change memory latencies of course but core to core is much better.
I’m glad someone else noticed this besides myself. I noted this during the Ryzen launch & quickly noted that by using CPU affinity along w CPU priority to force my games to run exclusively within 1 CCX & take advantage of using high CPU processing time on these same CPU cores I could take advantage of this up to a point.
What all this shows to me is that the OS & game developers’ software need to be revised to better handle this architecture at the core logic level, instead of users/AMD having to provide/use methods to try to do this that cannot be used in a more dynamic fashion. I’ve run some testing on Win 10’s Game Mode & discovered that MS is actually trying to use CPU affinity to dynamically set running game threads to run on the fastest/lowest-latency CPU cores to “optimize” game output thru the CPU, but it still tends to cross the CPU CCX’s at times if left on its own.
What I’ve found is by doing this my games run much smoother w a lot less variance which gives the “feel” of games running faster (actual FPS is the same) due to lower input lag & much better GPU frametime variance graph lines w very few spikes….essentially a fairly flat GPU frametime variance line which is what you want to achieve performance-wise.
Just to note….my box is running an AMD Ryzen 7 1800X CPU & a Sapphire R9 Fury X graphics card w no OC’s applied to either the CPU or GPU.
It’s a step in the right direction but needs more refinement at the OS level……
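For readers who want to try the CCX-pinning trick described in the comment above, here is a minimal sketch on Linux (the commenter is on Windows 10, where Task Manager’s “Set affinity” dialog or a process’s `ProcessorAffinity` mask in PowerShell does the same job). The assumption that one CCX maps to logical CPUs 0–7 is illustrative only; the real numbering depends on how the OS enumerates cores and SMT siblings, so check `/sys/devices/system/cpu/cpu*/topology` first.

```python
import os

# Hedged sketch: restrict this process to the logical CPUs we are *assuming*
# belong to the first CCX (0-7 here), intersected with the CPUs actually
# available so the call cannot fail on smaller machines. A game would be
# targeted by its PID instead of 0 (0 = the calling process).
ccx0 = set(range(8)) & os.sched_getaffinity(0)
os.sched_setaffinity(0, ccx0)

# The process scheduler will now only place our threads on those cores.
print(sorted(os.sched_getaffinity(0)))
```

Keeping all of a game’s threads inside one CCX avoids the cross-CCX (and, on Threadripper, cross-die) hops whose latency penalty the comment thread is describing; it does nothing for memory latency, as the earlier comment notes.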
As expected, performance per dollar is crap in single threaded tasks, which most workloads are. Games don’t even use more than 1 or 2 cores.
Yea games only use 2 cores lol
http://i.imgur.com/Hg3Ev5p.png
And “as expected”, we have yet another Intel shill complaining about gaming performance on a production CPU, which isn’t made for gaming (although it’s not bad in the least and has a longer future as devs code for more than Intel’s tiny core count (under $1000))..
-“performance per dollar is crap in single threaded workloads”…
Well, since these aren’t sold as a single or dual core CPU, performance per dollar as a unit is beyond everything on Intel’s menu.
– “Games don’t even use more than 1 or 2 cores”
Well, I’ve been using a FX-8350 for 2 years now, and I always see all 8 cores loaded up on every single game I play (and I have many). Windows 10 makes use of these cores even when it’s not coded in programs. It would work even better if devs started coding for at least 8 cores, and I believe they will start doing this in earnest now that 8-core CPUs are now considered average core counts (unless you’re with Intel).
You would have been better off stating that core vs core is in Intel’s favor on the 4-core chips and some others, but ironically the “performance per dollar”, as you mention is superior with AMD.. in every way.
What memory are you using, and could you name a 64GB kit that works in XMP? And why 3200Mhz over 3600?
Intel is still superior both in raw performance and in perf/$. If you were being objective you wouldn’t have slapped an editor’s choice on this inferior product.
In Handbrake the 1800x is 40% slower than the 1950x and in reverse the 1950x is 67% faster than 1800x.
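The reciprocal relationship in this comment checks out: if chip A is 40% slower than chip B, then B is 1/(1 − 0.40) − 1 ≈ 67% faster than A. A quick sanity check:

```python
# "40% slower" means the 1800X delivers 0.60x the 1950X's throughput,
# so the 1950X delivers 1/0.60 = 1.667x the 1800X's, i.e. ~67% faster.
slower_frac = 0.40
faster_frac = 1 / (1 - slower_frac) - 1
print(f"{faster_frac:.0%}")
```

This is why "X% slower" and "Y% faster" headlines about the same two chips can both be true while quoting different numbers.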
Open cinebench with a TR or even an 1800x. Show me any Intel chip that can come within 20% of the 1950x. The entire Ryzen 7 lineup is king of the “perf/$” category. 1800x = $365 on eBay right now. Look how close it matches with Intel products that are double the price or worse.
If you want to compare single core perf vs Intel, you can win an argument.. at the cost of very high power draw and even worse cash draw. Perf/$ is a dead argument for any Intel fanboy. Find something else. BTW, are you also commenting under “Thatman007” or something? Sound like the same Intel mouthpiece.
Sorry for necroposting, but it really belongs here:
The recent Meltdown vulnerability and its performance implications on Intel CPUs pretty much leveled the playing field now. After reading the article and all the comments above I opted for a very good B350 motherboard and a Ryzen 1800X to replace my Core i7 5930K (Haswell). Reason is that my CPU will likely be hit very badly performance-wise by the upcoming Windows 10 security update. Intel should pay back 30% to all affected CPU owners, actually…
Another reason is that I would likely not gain anything from NUMA, except for the additional complications. So I opted for the easier-to-manage (lower) power consumption and less noise from cooling as a result.
Thank you for collecting all the great info.