First teased at Computex earlier this summer, AMD has now released details and availability information for their 2nd Generation Threadripper CPUs.
Based upon the same 12nm Zen+ architecture we saw in Pinnacle Ridge CPUs like the R7 2700X, Threadripper will now be split into two product families: the X series and the WX series.
The X-series is mostly a refresh of the Threadripper SKUs that we saw last year, with 12- and 16-core variants. The Threadripper 2920X and 2950X will retain the same two-die, four-CCX arrangement that we saw with the previous generation, with the ability to run in either unified or non-unified memory modes.
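To put that in concrete terms (our own illustration, not AMD's), the non-unified mode exposes each die's memory controller to the operating system as a separate NUMA node, while unified mode interleaves everything into a single flat pool. A minimal Linux sketch using libnuma (built with -lnuma) shows which view the OS is currently presenting:

```c
/* Minimal sketch: report whether memory looks unified (one node) or
 * non-unified (multiple NUMA nodes) to the OS. Assumes Linux + libnuma;
 * build with: gcc uma_or_numa.c -lnuma */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    /* numa_available() < 0 means the kernel exposes no NUMA topology at all */
    int nodes = (numa_available() < 0) ? 1 : numa_max_node() + 1;

    if (nodes == 1)
        printf("one memory node visible: unified (UMA-style) view\n");
    else
        printf("%d memory nodes visible: non-unified (NUMA) view\n", nodes);

    return 0;
}
```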
Notably, the 8-core variant found in the original Threadripper lineup seems to be absent in the 2nd generation.
This new generation of Threadripper comes in less expensive than the last, with a $50 price drop on the 12-core CPU, and a $100 price drop on the 16-core variant.
The newest aspect of the 2nd Generation Threadripper lineup is the addition of the "WX" series processors. AMD is marketing these higher core count processors towards "Creators and Innovators" rather than gamers.
Available in both 24- and 32-core variants, the Threadripper WX series represents the highest core count consumer CPUs ever launched. Since we know that Zen+ dies contain a maximum of 8 cores, we can assume that these processors are using a four-die configuration, similar to the EPYC server parts, but likely with the same 64 lanes of PCIe and quad-channel memory controller as before.
This pricing is extremely aggressive compared to the highest core count competitor from Intel, the $2000 18-core i9-7980XE.
All 2nd Generation Threadripper CPUs will include the 2nd Generation Zen features that we saw in the Ryzen 5 2600 and Ryzen 7 2700 series, including XFR 2.0, StoreMI, and improved memory support and latency.
Additionally, these new Threadripper CPUs will use the existing X399 chipset, with UEFI updates being made available for existing X399 boards, as well as some new variants such as the MSI MEG X399 Creation launching alongside the new CPUs.
Availability of these processors is staggered, with the 32-core WX CPU shipping first on August 13th (and available now for preorder on Newegg and Amazon), followed shortly by the 16-core 2950X. However, we won't see the 12- and 24-core variants until October.
Stay tuned for our review of these parts as they reach retail availability!
These are definitely for workstation users, and overkill for the majority of mainstream users.
YOU SHUT YOUR DIRTY MOUTH!!! I MUST MEASURE MY SELF WORTH AND STROKE MY EPEEN BY MY HUGE CORE COUNT, RAM COUNT, and TITAN xP TO PLAY FORTNITE AND FEEL SUPERIOR AS 5 YEAR OLDS DESTROY ME AND MAKE FUN OF MY MOM!
It doesn’t take an extreme use case to show a benefit to having more cores. Some basic video encoding of game captures for sharing with one’s friends would be an example.
A person can also just enjoy neat technology. It doesn’t have to come down to a display of ego.
I don’t like these new price points, but I can’t fault anyone for buying one.
What’s new about these price points? Other than that they’re cheaper than Epyc and the Intel competition.
Anyone know of any good resource on how running a workload across multiple dies and memory banks affects performance on the first gen Threadripper and Epyc?
It’s a no-brainer that memory-intensive workloads are not going to do as well on a 32-core/64-thread processor with only 4 memory channels as they would with 8 memory channels. But for gaming workloads that’s not going to matter, and even Blender 3D rendering workloads do not suffer that much from being limited to 4 memory channels.
It’s always going to be better to have the most memory channels, but what matters for gaming is latency, more so at 1080p and below where the GPU is not the bottleneck. If TR2 has 4 full dies with only 2 of them having active memory channels, then game makers can set the game’s die affinity to only use the dies that have local memory channel access. On the 32-core TR2 that means there will be 2 dies (16 cores/32 threads) with local on-die memory channels (2 per die for at least 2 dies on every 4-die TR2 SKU). So for gaming that’s not as much of an issue, and game makers are already somewhat accustomed to TR1’s die arrangement. If AMD has made TR2 with the same 2-die arrangement, putting the active memory channels on the same 2 dies as the previous TR generation, then the other 2 dies without their own memory channels can mostly be ignored for gaming workloads and only be there when the user needs more processing power than 16 cores can provide.
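For what it’s worth, here is a minimal sketch of what that kind of die pinning could look like on Linux, assuming purely for illustration that logical CPUs 0-15 map to a die with local memory channels; a game on Windows would more likely use SetThreadAffinityMask, and real code should read the core numbering from the OS topology rather than hard-coding it:

```c
/* Hypothetical sketch: pin a latency-sensitive thread (e.g. a game's main
 * thread) to the cores of one die that has local memory channels.
 * Linux-only; build with: gcc pin_die0.c -pthread */
#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>
#include <stdio.h>

static void pin_current_thread_to_die0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);

    /* Assumption for illustration: logical CPUs 0-15 sit on a die that has
     * its own memory channels. Adjust to the machine's actual topology. */
    for (int cpu = 0; cpu < 16; cpu++)
        CPU_SET(cpu, &set);

    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
}

int main(void)
{
    pin_current_thread_to_die0();
    /* ...the latency-sensitive main/render loop would run here... */
    printf("main thread restricted to the assumed die-0 CPUs\n");
    return 0;
}
```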
ServeTheHome has already done some memory population testing on Epyc, with DIMMs in only 4 of the Socket SP3 board’s 8 memory channels, in order to gauge 4-channel memory performance. Der8auer, the extreme overclocker/modder, has a dual-socket Epyc server platform that he has taken his specialized modded chips to, getting around some of the clock speed restrictions on the locked-down Epyc platform. So keep up with der8auer’s YouTube channel and ServeTheHome for more follow-up articles, and as always Buildzoid’s and GN’s power/overclocking testing. Der8auer definitely has EE/ME engineering chops, so his projects are interesting as well as his YouTube channel.
CPUs are going to run mostly from their cache levels if they are designed properly, and TR2 has loads of L1/L2 and L3 cache (64MB of L3 on the 32- and 24-core TR2 SKUs). Most of the issues with gaming performance have to do with optimizations that game makers are not doing properly, and a lot of Nvidia’s gaming performance comes from Nvidia financing the proper amount of gaming optimization tailored to its GPU hardware, something AMD now has the extra revenue to spend more resources on.
AMD is already heavily investing in better software/driver assistance for its CPU/GPU customers. Zen 2 also appears to be coming with even larger cache options, so that’s going to take some of the bite out of latency issues on the Infinity Fabric. And with all those CPU cores on TR/TR2, it’s going to be easy for the optimizations to continue to increase performance, gaming or otherwise.
AMD could fully decouple the Infinity Fabric’s clock domain from the memory clock domain, but that would require extra clock domain synchronization circuitry that may itself affect latency. AMD’s GPU Infinity Fabric is not tied to the VRAM memory clock domain, but GPUs are not as latency-sensitive as CPUs, what with all the latency hiding that can be accomplished across all those GPU cores, and large L1/L2 and even L3 (not used as much on GPUs) caches help there also. It’s really only the main CPU control thread and its attendant draw call threads that need to worry about latency anyway, so that work can be done mostly on a single die, or even handled by a single CCX unit with 4 cores and 8 threads (on up to 8 cores and 16 threads), with any other CPU threads available if needed at the cost of a little extra latency that may be easy to hide with all those cores available on TR/TR2.
Vega’s HBCC/HBC can turn HBM2 into a GPU last-level cache on any product that comes with HBM2/GDDR# memory, but that’s only of use if there is a slower/lower-bandwidth tier of memory on the system to make use of, or if the GPU is discrete and interfaced to the system via PCIe.
AMD needs to start thinking about powerful APUs with MCM/interposer-based GPUs that also have access to HBM2 and even eDRAM last-level caches, so it becomes feasible to fully hide any latency issues that arise from using slower system DRAM. GPUs really like having larger L2 caches, and Vega’s HBCC/HBC (if provided) is a direct client of the L2 cache on Vega GPUs, along with the Render Back Ends (ROPs), which are directly connected to Vega’s L2 cache.
Edit:
but that’s only of use of there is a slower/lower bandwidth tier of mamory
To:
but that’s only of use if there is a slower/lower bandwidth tier of memory
mamory, P. Griffen chuckles!
WikiChip’s Fuse has an article on TR2 describing it as having a “compute die” (AMD’s term) and an I/O die (a complete Zeppelin die with 2 memory channels):
“For the high core count chips, AMD introduced two new concepts: a compute die and an I/O die. A compute die is a Zeppelin without local I/O support. That is, a “compute die” has two CCXs and neither local DRAM access nor PCIe lanes. An I/O die is a full Zeppelin with the two CCXs, two memory channels, and 32 PCIe lanes. It’s worth noting that in practice, two dies are identical with the I/O subsystem just fused off but the semantics are important here.” (1)
They also have a nice graph of price per core and price per thread based on introductory MSRP pricing for both TR and the new TR2 SKUs. It’s a very detailed article with new information. A rough sketch of how that compute die/I/O die split could present itself to software follows after the link below.
(1)
“AMD Announces Threadripper 2, Chiplets Aid Core Scaling”
https://fuse.wikichip.org/news/1569/amd-announces-threadripper-2-chiplets-aid-core-scaling/
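To make that distinction a bit more tangible, here is a rough libnuma sketch (Linux-only, and my own assumption of how the parts would present, not taken from the WikiChip article): in NUMA mode a memory-less compute die should show up as a node reporting no local DRAM, so latency-sensitive work can be steered to a node that actually has memory behind it.

```c
/* Rough sketch: tell "I/O die"-style nodes (local DRAM present) apart from
 * "compute die"-style nodes (no local DRAM) and steer work to the former.
 * Assumes Linux + libnuma; build with: gcc die_nodes.c -lnuma */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0)
        return 0;  /* no NUMA topology exposed: nothing to distinguish */

    for (int node = 0; node <= numa_max_node(); node++) {
        long long free_bytes;
        long long size = numa_node_size64(node, &free_bytes);

        if (size <= 0) {
            printf("node %d: no local DRAM (compute-die style)\n", node);
            continue;
        }

        printf("node %d: %lld MiB local DRAM (I/O-die style)\n",
               node, size >> 20);

        /* Prefer running on, and allocating from, a node that has memory. */
        numa_run_on_node(node);
        void *buf = numa_alloc_onnode(64 << 20, node);  /* 64 MiB local buffer */
        if (buf)
            numa_free(buf, 64 << 20);
        break;
    }
    return 0;
}
```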
Videocardz has more of that NDA’d slide deck leaked; the cat’s out of the bag a little more!
More slides from KitGuru’s video, via Videocardz:
“In the most recent “Leo Says” episode from Kitguru, you will find photos from AMD Threadripper press briefing. Some of those slides were not made public, well until now. They were not part of the yesterday’s press deck.
The slides cover Ryzen Master profiles for Threadripper 2000, UMA vs NUMA modes comparison. They even reveal the pricing of Wraith Ripper.
You may want to watch the video first before skipping to screenshots we made for your viewing pleasure.” (1)
(1)
“Upon checking Threadripper 2000 announcement videos, we found an interesting piece from KitGuru.
AMD Ryzen Threadripper 2000: UMA vs NUMA, Wraith Ripper, Ryzen Master Profiles”
https://videocardz.com/newz/more-slides-from-ryzen-threadripper-2000-press-briefing-emerge
Anyone pick up on the fact that they can recycle dies with defective memory and PCIe controllers? I would assume this to be about another 5-10% die reuse that might otherwise be dead.
And that is genius