Over the summer, AMD introduced its Naples platform, the server-focused implementation of the Zen microarchitecture in an SoC (System on a Chip) package. The company showed off a prototype dual-socket Naples system and bits of information leaked onto the Internet, but for the most part news has been quiet on this front (whereas there were quite a few leaks of Ryzen, AMD's desktop implementation of Zen).
The wait finally seems to be over, and AMD appears ready to talk more about Naples, which will reportedly launch in the second quarter of this year (Q2'17) with full availability of processors and motherboards from OEMs and channel partners (e.g. system integrators) in the second half of 2017. Per AMD, "Naples" processors are SoCs with 32 cores and 64 threads that support eight memory channels and a (theoretical) maximum of 2TB of DDR4-2667. (Using the 16GB DIMMs available today, Naples supports 256GB of DDR4 per socket.) Further, the Naples SoC features 64 PCI-E 3.0 lanes. Rumors also indicated that the SoC includes support for sixteen 10GbE interfaces, but AMD has yet to confirm this or the number of SATA/SAS ports offered. AMD did say that Naples has a cache structure optimized for HPC compute and "dedicated security hardware," though it did not go into specifics. (The security hardware may be similar to the ARM TrustZone technology it has used in the past.)
Naples will be offered in single and dual socket designs, with dual socket systems offering up to 64 cores, 128 threads, and 32 DDR4 DIMMs (512 GB using 16 GB modules) on 16 total memory channels with 21.3 GB/s of bandwidth per channel (170.7 GB/s per SoC), 128 PCI-E 3.0 lanes, and an AMD Infinity Fabric interconnect between the two processor sockets.
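As a quick back-of-the-envelope check of those figures (an illustrative sketch, not anything published by AMD), the per-channel number falls out of the DDR4-2667 transfer rate, and the platform totals follow from the channel and DIMM counts above:

```python
# Back-of-the-envelope check of the Naples platform figures quoted above.
# Assumes DDR4-2667 with a 64-bit (8-byte) channel; these are theoretical
# peaks, not measured bandwidth.

MT_PER_S = 2667           # DDR4-2667 transfer rate, millions of transfers/s
BYTES_PER_TRANSFER = 8    # 64-bit channel

per_channel_gbs = MT_PER_S * BYTES_PER_TRANSFER / 1000   # ~21.3 GB/s
per_socket_gbs = per_channel_gbs * 8                      # 8 channels per SoC
dual_socket_gbs = per_socket_gbs * 2                      # 16 channels total

dimm_slots_2p = 32
print(f"Per channel: {per_channel_gbs:.1f} GB/s")         # 21.3
print(f"Per socket:  {per_socket_gbs:.1f} GB/s")          # 170.7
print(f"Dual socket: {dual_socket_gbs:.1f} GB/s")         # 341.4
print(f"2P capacity: {dimm_slots_2p * 16} GB with 16 GB DIMMs, "
      f"{dimm_slots_2p * 64 // 1024} TB with 64 GB DIMMs")  # 512 GB / 2 TB
```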
AMD claims that its Naples platform offers up to 45% more cores, 122% more memory bandwidth, and 60% more I/O than its competition. For its internal comparison, AMD chose the Intel Xeon E5-2699A V4, the highest-core-count processor intended for dual socket systems (there are E7s with more cores, but those are aimed at 4P systems). The Intel Xeon E5-2699A V4 is a 14nm, 22 core (44 thread) processor clocked at 2.4 GHz base to 3.6 GHz turbo with 55MB of cache. It supports four channels of DDR4-2400 for a maximum bandwidth of 76.8 GB/s (19.2 GB/s per channel) as well as 40 PCI-E 3.0 lanes. A dual socket system with two of those Xeons features 44 cores, 88 threads, and a theoretical maximum of 1.54 TB of ECC RAM.
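For reference, AMD's percentage claims line up with the raw numbers above. A small sketch (using the theoretical peak figures quoted in this article, not measured results) reproduces them:

```python
# Reproduce AMD's comparison claims from the peak figures quoted above:
# dual Naples (64 cores, 16 x DDR4-2667 channels, 128 PCIe lanes) versus
# dual Xeon E5-2699A v4 (44 cores, 8 x DDR4-2400 channels, 80 PCIe lanes).

naples = {"cores": 64, "mem_gbs": 21.3 * 16, "pcie_lanes": 128}
xeons  = {"cores": 44, "mem_gbs": 19.2 * 8,  "pcie_lanes": 80}

for key in naples:
    advantage = (naples[key] / xeons[key] - 1) * 100
    print(f"{key:10s}: {naples[key]:g} vs {xeons[key]:g} -> {advantage:.0f}% more")
# cores      -> ~45% more
# mem_gbs    -> ~122% more
# pcie_lanes -> 60% more
```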
AMD's reference platform with two 32 core Naples SoCs and 512 GB of DDR4-2400 was purportedly 2.5x faster at a seismic analysis workload than the dual Xeon E5-2699A V4 OEM system with DDR4-1866. Curiously, when AMD compared a Naples reference platform with only 44 cores enabled and running 1866 MHz memory to a similarly configured Intel system, the Naples platform was still twice as fast. It seems that the increased number of memory channels and the additional memory bandwidth are really helping the Naples platform pull ahead in this workload.
The company also intends Naples to power machine learning and AI projects with servers that feature Naples processors and Radeon Instinct graphics processors.
AMD further claims that its Naples platform is more balanced and better suited to cloud computing and scientific and HPC workloads than the competition. Specifically, Forrest Norrod, Senior Vice President and General Manager of AMD's Enterprise, Embedded, and Semi-Custom Business Unit, stated:
“’Naples’ represents a completely new approach to supporting the massive processing requirements of the modern datacenter. This groundbreaking system-on-chip delivers the unique high-performance features required to address highly virtualized environments, massive data sets and new, emerging workloads.”
There is no word on pricing yet, but it should be competitive with Intel's offerings (the E5-2699A V4 is $4,938). AMD will reportedly be talking data center strategy and its upcoming products during the Open Compute Summit later this week, so hopefully there will be more information released at those presentations.
(My opinions follow)
This is one area where AMD needs to come out strong, with support from motherboard manufacturers, system integrators, OEM partners, and OS and software validation, in order to succeed. Intel is not likely to take AMD encroaching on its lucrative server market share lightly, and AMD has a long road ahead of it to regain the market share it once had in this area. Still, it has a decent architecture to build on with Zen, and if it can secure partner support, Intel is going to face competition here that it has not seen in a long time. Intel and AMD competing over the data center market is a good thing, and as both companies bring new technology to market it will trickle down into consumer-level hardware. Naples' success in the data center could mean a profitable AMD with R&D money to push Zen as far as it can go, so hopefully they can pull it off.
What are your thoughts on the Naples SoC and AMD's push into the server market?
So, 32 DDR4 slots that technically fully support 64GB per plaque, hence the 2048GB of total possible memory per motherboard. I wonder what the final latency is going to be if all of these are filled out. I highly doubt that after all the ironing out it's going to be slower than InFail's so-called "offerings", but nevertheless you really wouldn't want to be a GAYMURR on any of these. For a home server, though…damn, I just can't wait for NAS solutions based on Naples. Twice the performance of InHell's and less power hungry, while way cheaper at the same time.
Intel is still more efficient, isn’t it?
Sounds like it really depends on frequency. Zen and Naples seem super efficient at 3.2GHz and below…
It is not. ‘dem Xeons are power hungry and hot as F, when it comes down to full utilization.
Aren't the 3DS DDR4 DIMMs lower latency than standard DDR4?
I do buy Intel for my own computers, but if these CPUs are even close to what this indicates, Intel will have to drop prices or actually start being innovative when it comes to server CPUs.
That being said, Skylake Purley has 6 channels (if I'm not mistaken, and if it works like the current E7s, it will be able to run in 12-channel mode for performance or mirrored 6-channel mode for mainframe reliability). It will also support XPoint DIMMs, which will be very important to some users.
Like Skylake for the desktop was something like the new Sandy Bridge, Skylake Purley will supposedly be the new Sandy Bridge for the datacenter.
HPC will be a different matter. People run all Xeon machines with thousands of CPUs and some supercomputer architectures based on x86 only are quite good.
However, I think that Fujitsu's approach to large machines will probably destroy Intel, AMD, and everyone else, mostly because they've been several years ahead for a while. They have had supercomputers with CPUs that use HMC and more advanced optical interconnects than anyone else since early 2015. With SoftBank buying ARM, and Fujitsu licensing Nantero's CNT NRAM, as well as their experience with the six-year-old but STILL #1 Graph500 and #1 HPCG K computer, they are the ones to beat.
Intel does own Altera and Nervana, so they can definitely bring out a lot more processor diversity than they currently have. AMD will have to really get inventive with their APUs, SoCs and CPUs if they want to start taking serious market share from Intel.
It'll be a good thing for everyone if they do too, and supercomputers start using them again. After all, the Titan supercomputer used Nvidia GPUs with AMD Opterons as host processors.
How these chips handle the issues of data locality and pJ/bit of data movement and pJ/FLOP, which will come down to their architecture, should be interesting.
But the real question here is: will it better the i7-7700K in gaming? Because that's what matters.
The gaming-ONLY ManChild must have his draw call ego/super ego conflict stroked, but the real money is in the professional markets for AMD and others. Enjoy your costly 4-core ripoff, but others will be actually computing with their affordable 8-core Ryzen SKUs. And what is all this hype about overclocking headroom on the top end 7700 "K" SKUs, and why does it have so much "overclocking" headroom to begin with? Intel should maybe be offering higher base/boost clocks out of the box on the 7700 "K" if it has so much overclocking headroom in the first place; what's up with that, Intel?
I smell an intentional under-binning product segmentation scheme on the 7700 "K" part from Intel, while the AMD Ryzen 1700 is the real overclocker's dream at $329, overclocked to 4 GHz to perform like the 1800X for great savings on those golden 1700 samples that can stay stable at 4 GHz.
Top end/top binned parts should by default have no overclocking headroom if the CPU's maker is doing things on the up and up, with those golden lower binned parts being the ones with the real overclocking headroom for real overclockers to get great deals with!
Woosh!
Indeed
Ky Kiske.
The fact is that games push the limits of the CPU and GPU, so they make excellent benchmarks for new microarchitectures. nVidia excelling in games translates to their successes in the enterprise market. Similarly for Intel Xeons. I don't see AMD doing any better in servers when they can't even compete with Intel in modern desktop games.
Games do not push all the limits that matter in the professional markets; otherwise the professional server/HPC websites would not need all those professional-focused server/HPC benchmarks to compare professional systems from the various vendors. Again with the bog standard basement dwelling gaming Git's analysis of the professional markets.
The server/HPC/workstation markets do not always measure things in frames per second; they use price/performance metrics worked into TCO (total cost of ownership) metrics prepared by the professional actuaries. So more variables, with the proper amount of statistical/mathematical science done by the PhDs/PhD consultants. And most certainly the power usage figures play an important role, among many other standard and peer/professionally reviewed (by actual boffins with the sheepskins) tests that do not exist in the "technical"/gaming press.
AMD has its Radeon Pro WX GPU and Instinct GPU/AI SKUs to make up for any FP deficiencies in its Zen/Naples CPU SKUs relative to Intel's Xeon SKUs for the HPC markets where FP workloads are needed. Also, AMD's Zen/Naples core count advantage is likely priced at a lower cost than any 24 core Intel server/HPC SKU, so that initial cost savings gets added into the TCO, along with other ancillary savings like the interest expense saved by having lower lifetime financing costs for that professional kit from AMD at its lower initial cost.
That real world, with all its complex relationships, is not noticed, and most likely avoided, down in the dark, dank, and smelly environment of the basement dwelling narcissists with a need for ego sublimation satiated by any FPS metrics for fantasy land.
Edit: 24 core
to: 22 core
hahahah whaat? Zen has done much better in productivity applications than it has in games so far, and games are not an indicator of server/productivity workloads.
And this is all without proper Windows support or drivers for the CPU, either.
As the article noted, AMD is going to HAVE to execute well for the server market. Half-assed launches will not be tolerated there. That is their issue, and it will be interesting to see if they can pull it off.
I would be very surprised if AMD didn’t already know this however. So if they think they’ll be ready to launch servers in Q2 (so April-June) that must mean that they’re confident that Zen’s current issues will be worked out well before then.
Game workloads are nothing like supercomputer or server workloads. They are, at best, a test of single core performance and latency.
Some games with shitty menus coded in ScaleformUI (a form of Flash that should be destroyed) are good POWER VIRUSES to test GPU VRMs and cooling.
I see AMD actually releasing a really nice CPU that even uses indium-tin solder, for 1/2 the price of Intel's offerings.
First question: is the 32 core Zen/Naples variant made up of two 16 core dies on a module, or is it one big monolithic 32 core Zen/Naples die? The next question depends partly on the answer to the first: what 24, 16, or even 12 core Zen server variants will also be available as lower cost, more affordable workstation options with lower core count SKUs?
Also, AMD is, along with IBM (creator of the CAPI coherent interconnect IP) and others, a founding member of OpenCAPI! So what support will there be for OpenCAPI on AMD's server motherboard variants and associated motherboard chipsets, as well as any CPU-integrated NB support for the various open-standards interconnect fabric IP on AMD's server SKUs? OpenCAPI support on AMD's GPU/AI accelerator products is another question, for any support of AMD's GPU/AI GPU SKUs on Power9/OpenPower based systems that any third party OpenPower Power9 licensees may use in their server facilities (Google/others).
Concerning any AMD workstation motherboard SKUs: will there be some single socket low end workstation MB options for those home/prosumers that may want more than 8 Zen cores for more demanding workloads?
Final statement for AMD: talk more about the AMD Infinity Fabric and how it relates to the CCX units that both the Zen/Naples and Zen/Ryzen consumer variants are using. And when I say talk more, I mean some Hot Chips symposium levels of white papers about that Infinity IP, the ones you should have said more about at the 2016 Hot Chips symposium. More whitepapers, and please get those CPU optimization manuals, and Zen/Ryzen related optimization manuals, out there in larger numbers for the developer community.
The server/workstation/HPC market is what is really going to save AMD's bacon, not the consumer gaming market as much. I'm looking forward to some great Zen server 16 or 12 core variants with single socket/low cost PRO MB options in addition to the dual socket MB/server options. So hopefully there will be 24, 16, and 12 core Zen server CPU SKU options, and an update from AMD on any HPC/workstation/server APU-on-an-interposer variants that are in the development pipeline.
AMD is back!
P.S. Also, on these first systems the Zen/Naples CPU-only SKUs can be paired with discrete GPUs via the PCIe lanes for standard GPU accelerator workloads.
Here are some videos of Zen/Naples CPU-only benchmarks, linked at TechReport, for some geological/oil industry seismic analysis workloads over very large data sets: Zen/Naples versus a "two-socket Broadwell-E server with the same pair of Xeon E5-2699A v4 CPUs"(1)
“AMD’s Naples platform prepares to take Zen into the datacenter”
http://techreport.com/news/31549/amd-naples-platform-prepares-to-take-zen-into-the-datacenter
Oops! More mistakes on my part: that GPU (direct-attached GPU) usage does not necessarily need any of the remaining available Zen/Naples PCIe lanes, but can instead use the other 64 lanes per Zen/Naples MCM, the PCIe-purposed fabric lanes that are "lost" to inter-socket duty, for NVLink-style direct-to-GPU usage:
“64 of the PCIe lanes from each processor are lost, as the pins are used for inter-socket communication.”(1)
So those "lost" PCIe lanes can instead be used as follows:
“Alternatively, those same pins can be used for direct-attached GPUs (which is to say, not using PCIe). That’s comparable to what Nvidia is doing with its NVLink interconnect.”(1)
Here is the information quoted above in full context from the Ars Technica article (very interesting):
“Naples is a two-socket server chip aimed squarely at Intel’s Broadwell-EP-based Xeon E5 V4 range, and the overall theme of AMD’s chip is “have more of everything.” Naples has 32 cores, capable of 64 simultaneous threads, eight memory channels, supporting up to 2TB RAM and 128 PCIe 3.0 lanes. Intel’s comparable offering? Twenty-two cores and 44 threads, four memory channels, and a maximum of only 1.5TB RAM.
The PCIe pins are multiplexed and can be used for things other than PCIe. In two-socket systems, 64 of the PCIe lanes from each processor are lost, as the pins are used for inter-socket communication. That leaves 64 from each socket available for I/O. The inter-socket communication uses AMD’s “Infinity Fabric,” the (somewhat ill-defined) high-speed cache coherent interconnect that’s also used within Zen.
Alternatively, those same pins can be used for direct-attached GPUs (which is to say, not using PCIe). That’s comparable to what Nvidia is doing with its NVLink interconnect. Later in the year, AMD is going to ship Radeon Instinct headless GPUs. These will be used for both supercomputing-type workloads as well as accelerated graphics in virtualized desktops. The company is promising that at least four Instinct cards can be used with each Naples processor. The same I/O channels will also support Ethernet and NVMe storage; Naples is, like Ryzen, a system-on-a-chip, and it supports up to 12 NVMe drives. It also supports Ethernet, though AMD hasn’t specified the number of Ethernet ports supported or the maximum supported link speed.”(1)
(1)
“AMD Naples server processor: More cores, bandwidth, memory than Intel”
https://arstechnica.com/information-technology/2017/03/amd-naples-server-processor-more-cores-bandwidth-memory-than-intel/
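A rough sketch of the lane accounting the quoted passage describes, assuming (per the quote) 128 multiplexed PCIe 3.0 lanes per Naples SoC with 64 of them repurposed per socket for the inter-socket Infinity Fabric link in a 2P system:

```python
# Lane accounting as described in the quoted Ars Technica passage (assumed
# figures): 128 multiplexed PCIe 3.0 lanes per Naples SoC; in a 2P system,
# each socket gives 64 of them to the inter-socket Infinity Fabric link
# (or, alternatively, to direct-attached GPUs).

LANES_PER_SOCKET = 128
INTER_SOCKET_LANES = 64   # per socket, only consumed in a 2P configuration

def usable_io_lanes(sockets: int) -> int:
    """PCIe lanes left for I/O after inter-socket links are carved out."""
    if sockets == 1:
        return LANES_PER_SOCKET
    return sockets * (LANES_PER_SOCKET - INTER_SOCKET_LANES)

print("1P I/O lanes:", usable_io_lanes(1))   # 128
print("2P I/O lanes:", usable_io_lanes(2))   # 128 total (64 per socket)
```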
Ah interesting, thanks for the link! As to the chip layout, I believe it is four 8c/16t dies on a single package along with the I/O and memory controllers.
People are suggesting that it’s the exact same die as Ryzen. Which would make sense given AMD’s limited resources, and means that Ryzen has hidden depths.
Yeah, I would not be surprised by that at all, it is likely the same or very similar dies but then they are pulling more of the I/O on package rather than an external chipset. I am guessing, since their semi custom business unit is involved, they are tweaking the design a bit. Not sure though.
Wasn't it the Opteron 6300 or something a few years ago that was essentially two desktop Piledriver CPUs on a single package? I vaguely remember writing something about that, heh.
Yep, the 6200 and 6300 Opterons have two dies connected by HyperTransport. I think they’re identical to the dies in the desktop parts.
We have several of those servers still in operation at work, but they’re off at our colo facility so I’ve never actually seen them.
Yes, Zen/Ryzen is 1/4 of a Zen/Naples! So Naples is four 8 core dies joined up on an MCM and lashed together with the Infinity Fabric and some other IP. What this allows for is some modular construction latitude on the MCM, maybe even for larger SKUs than the current 32 core top end Zen/Naples variant. Those wafer yield issues will also be much better with AMD sticking to smaller modular 2-CCX dies, with any bad cores on a dual CCX die able to be binned down for consumer market usage in Ryzen 5 or even Ryzen 3 SKUs.
AMD appears to be using that MCM to great advantage for Naples, with any extra IP dies alongside the 4 dual CCX dies, where AMD has the ability to scale its server CPU-only SKUs in increments of 8 cores (2 CCX units) for maybe some 16 and 24 core lower cost server/workstation variants, with any defective dual CCX runts easily converted to useful lower binned consumer Ryzen 5 or 3 series parts.
Any new interposer-based server/HPC/workstation APUs will probably make use of these same dual CCX dies and a GPU die (Vega) along with the HBM2 stack(s). Future interposer-based APUs and discrete GPUs under Navi will probably see the GPU split up into chiplets (modular dies) in a similar fashion to the Ryzen and Zen/Naples modular dual CCX dies. So better wafer yields for any Navi modular GPU designs that can be scaled up in modular fashion, with CPU chiplets, GPU (Navi) chiplets, and HBM2 die stacks added to the interposer for AMD's entire range of products, laptop to HPC/exascale SKUs.
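A small sketch of the core-count math described above, assuming the rumored layout (4 dies per Naples MCM, 2 CCXs per die, 4 cores per CCX) and symmetric core disabling for salvage parts; the specific lower-core SKUs are hypothetical:

```python
# Core-count math for the rumored Naples layout: 4 dies per MCM, 2 CCXs per
# die, 4 cores per CCX. Hypothetical salvage SKUs disable the same number of
# cores in every CCX to keep the topology symmetric.

DIES_PER_MCM = 4
CCX_PER_DIE = 2
CORES_PER_CCX = 4

full = DIES_PER_MCM * CCX_PER_DIE * CORES_PER_CCX
print("Fully enabled Naples MCM:", full, "cores")          # 32

for disabled in range(1, CORES_PER_CCX):
    cores = DIES_PER_MCM * CCX_PER_DIE * (CORES_PER_CCX - disabled)
    print(f"{disabled} core(s) disabled per CCX -> {cores} cores")
# -> 24, 16, 8 cores
```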
It IS supposed to be exactly the same structure, because Zen was developed as a very versatile, puzzle-like platform to begin with. There are just more CCXs in the package, but structure-wise I don't think Naples differs from "gen1" Ryzen in the slightest. If it does, that would be strange, in all honesty.
Anandtech say it’s four eight-core dies on a multi-chip module: http://www.anandtech.com/show/11183/amd-prepares-32-core-naples-cpus-for-1p-and-2p-servers-coming-in-q2
” “Naples” processors are SoCs”
No GPU on these first server SKU variants from AMD, so no SoC moniker; but the APU-on-an-interposer variants will be what an SoC is, except that the interposer-based APUs will be made from separately fabricated CPU dies, separately fabricated GPU dies, and HBM2 stack(s), all wired up via the interposer's silicon substrate as if they were made on a single monolithic silicon die.
Take a look at this exascale APU-on-an-interposer PDF and see where that SoC (APU in AMD's naming) IP is going from AMD (1). Note the HBM2 dies directly on the modular GPU chiplets, and see the CPU chiplets too, with the whole APU made in modular fashion atop several active interposers joined up into a very powerful system.
"Abstract— The challenges to push computing to exaflop levels are difficult given desired targets for memory capacity, memory bandwidth, power efficiency, reliability, and cost. This paper presents a vision for an architecture that can be used to construct exascale systems. We describe a conceptual Exascale Node Architecture (ENA), which is the computational building block for an exascale supercomputer. The ENA consists of an Exascale Heterogeneous Processor (EHP) coupled with an advanced memory system. The EHP provides a high-performance accelerated processing unit (CPU+GPU), in-package high-bandwidth 3D memory, and aggressive use of die-stacking and chiplet technologies to meet the requirements for exascale computing in a balanced manner. We present initial experimental analysis to demonstrate the promise of our approach, and we discuss remaining open research challenges for the community." (1)
(1)
“Design and Analysis of an APU for Exascale Computing”
http://www.computermachines.org/joe/publications/pdfs/hpca2017_exascale_apu.pdf
Apparently SoC in the server market does not mean the same thing as SoC means in the consumer market, so I'll eat some crow pie on that. [Heads out with bag of corn kernels to attract crows]
Having the southbridge on the 4-die MCM makes it an SoC in the server market (no GPU required), so a single Zen/Naples 32 core SKU is made up of four 8 core dies on an MCM(1), with the southbridge included on the MCM making it an SoC.
“When we first spilled the details of Naples last June, there was one thing still up in the air, does the 4-die MCM use a soutbridge or is it a stand-alone SoC. The answer is it is a stand-alone SoC and all the goodies come from the MCM package unlike the desktop variant. This fully qualifies as ‘neato’ in our book but it does have one major caveat as the below picture points out.”(1)
(1)
"AMD's Naples put them back in the server game"
http://semiaccurate.com/2017/03/07/amds-naples-put-back-server-game/
Right, it is a system on a chip in the sense that there does not need to be a separate southbridge/PCH that handles the I/O.
I do wonder if the heterogeneous approach to exascale will really work. Today, CPU-only supercomputers still seem to be better in real world performance than heterogeneous ones, and are much more computationally efficient.
Not if you look at the whitepaper/research paper: GPUs, with their massive FP performance, and much lower clocked FP/TFLOP parallelism at that, save the most power in the performance/watt or GFLOPS/TFLOPS per watt metrics. Ditto for the HBM2 right on top of the GPU chiplets, with HBM2's high effective bandwidth at low relative clock rates via HBM2's very wide connection fabric. Power usage does not scale linearly with increasing clocks. So with the heterogeneous exascale APU design built across multiple interposers on a multi-interposer module that uses interposers of an active design, power can be saved via parallelism of both FP units and connection fabrics that are all clocked lower but provide massive effective FP performance and massive effective bandwidth, all while being clocked much lower than any CPU-only solution.
So the interposers themselves are not a passive design and probably even have a complete VERY WIDE parallel on-interposer coherency fabric/circuitry etched into the interposer's silicon to manage the CPU chiplets and GPU chiplets. That HBM2/newer HBM# sitting right on top of the GPU (Navi based) chiplets will have even lower latency, being stacked in 3D on top of the GPU chiplets and not "2.5D" stacked next to them, so lower latency directly through the DRAM TSVs into the GPU, with the lowest latency pathways possible using very wide parallel connections.
No, this design is ready made for exascale computing, with all the extra wide connection fabrics supported in the active interposers themselves, so CPU/GPU/HBM2-and-newer all run at a manageable clock rate to hit that 20MW-per-exaflop power usage target for the government funded exascale computing R&D initiative. CPU-only designs, with their narrow bus, non massively parallel computing/memory designs, cannot hope to be clocked low enough and still meet any of the exascale initiative's target metrics for power usage, low thermal heat dissipation, or even the same cluster rack space density requirements.
For the real world, all the R&D funding from U-SAM will find its way into the consumer market, which does want the most power savings possible with great performance metrics. So AMD's APU-on-an-interposer designs have great potential across all markets.
“Intel is leveraging its commanding Xeon lead as a springboard to attack other lucrative segments, such as networking with Omni-Path/silicon photonics and memory with 3D XPoint, but it leverages locked-down proprietary interconnects that have raised the ire of the broader industry.
AMD, by contrast, leverages open protocols where it can and participates in developmental efforts on several new open interconnects, such as CCIX, Gen-Z, and OpenCAPI, so for the broader industry, a competitive AMD represents more than just a cost advantage and second source. Let’s dive in.”(1)
Now here is what I do not understand: why is there never any reference to Micron's QuantX brand of 3D XPoint when depicting 3D XPoint as an intrinsic Intel market advantage in any server/consumer market competition? 3D XPoint is not the sole proprietary IP of Intel, and granted Intel does have a limited Optane/XPoint product to market first! But 3D XPoint is not an Intel advantage, with Micron (XPoint's co-creator) also going to be competing in the 3D XPoint market! So AMD's Zen/Naples based server SKUs from the many OEMs could very well make use of Micron's QuantX in a DIMM based offering or other offering from Micron, and that's not really an Intel market advantage in the long run over AMD!
I do agree with what the Tom's writer is saying for the most part, but why is this Intel/Optane-only mindshare not corrected with a reference to the competing Micron QuantX in the press? It's almost a reflex to reference Optane while forgetting that Micron's QuantX (XPoint also) will be on the market to compete with Intel's 3D XPoint offerings.
(1)
“AMD Demo’s Naples Server SoC, Launches Q2 2017”
http://www.tomshardware.com/news/amd-zen-naples-soc-server,33819.html
Then it should really give Skylake Purley a run for its money.
But will it play Crysis?
Not as good as a 7700K, if you insist on using a GTX 1080 to game at a resolution of 1080p.
Other than pros who want hundreds of fps and game at 1080p, I don't think most reasonable people could give a fuck.
This server chip is fucking awesome.
No doubt it will trickle down to a super high end Ryzen with more memory channels and more PCIe lanes for those who believe they need them.
Good thing GPUs won’t get any faster and games won’t become more demanding in the future.
Not that it doesn't already, but then Ryzen will really shine, because developers will code properly for multi-core CPUs, which will become the new standard for game development and for processors affordable by the masses, thanks to AMD.
This Naples is literally 4 Ryzen dies in a package.
Most computer users are not gamers, so yes, AMD offers the best all around solution, while both Intel and Nvidia are compromising on general compute to placate the games market, which is not really that large relative to the all around compute market. It's not like AMD's CPUs are that far behind in gaming to begin with, and some tweaking to Ryzen is on the way to improve gaming performance.
So AMD's CPU and GPU SKUs are tailored for more all around usage metrics that are of value to most users.
There will be a Crysis in Intel's boardroom, and Krzanich may have to grasp at that golden ripcord and pull like mad to unfurl that golden parachute to a safe landing after unloading all of his Intel stocks and bonds!
Those high Intel margins are going down like a uranium brick tossed out of a biplane, and Krzanich has no markets remaining to milk! So there will definitely be that form of Crysis playing out for Intel in the server market, in that sense of the word!
Real high end gaming requires a real GPU, and Intel does not have any GPU SKUs for that part of the consumer high end GPU market! Intel has always needed a little help with the graphics part of the high end gaming market!
You can get 64GB registered / load-reduced DIMMs today (around $750 on Amazon). That gives them the 2TB capacity for a 2S server.
I’m more interested in the lower-end server chips, unless this comes in unexpectedly cheap. But it’s great to see serious competition again.
Yeah, I've already pointed that out. They've gone with the 16GB modules as a reference simply to play it safe for now, but it can obviously handle 32GB and 64GB plaques just as well. The latency with those, though…I shudder just thinking about what the final latency will be after a typical top tier Naples motherboard is filled with 32GB/64GB modules to the very brim.
The Zen/Naples platform can support more of the more affordable 16GB DIMM SKUs for a lower cost and better latency solution on its Zen/Naples server SKUs with respect to the seismic benchmark demonstrated.
So if you are an oil field geology survey company, the savings that come from using the lower capacity DIMMs with the better latency performance on the Zen/Naples platform are very attractive. The Zen/Naples platform can host more of the smaller capacity DIMMs, for a higher effective total memory capacity at 16GB per DIMM, relative to Intel's offering, and end users of Zen/Naples can leverage that smaller, lower cost per DIMM and better latency metric for seismic and similar workloads.
So for the prospective oil field geology survey client, AMD's Zen/Naples platform offers more versatility for cost and workload tuning, using the smaller, lower latency, lower cost DIMMs that can also be clocked higher relative to larger capacity DDR4 DIMMs.
It's not about playing it safe, it's about price/performance and tuning servers for workloads using smaller capacity (16GB, more affordable) DDR4 DIMMs that have better latency and can be clocked higher to crunch those seismic data sets. Who would have thunk it! Businesses like to save money and tune their server hardware for specific workloads!
Interestingly enough, for some reason (it might be just a typo, but nevertheless) WCCF reports that Naples actually could do 4TB of memory, rather than 2TB reported here. I wonder…do they know something PcPer doesn’t, or are they simply preparing for 128GB plaques?
Not sure why they said 4TB. From the info I got from AMD and in their launch video, they state up to 2TB using 64GB DIMMs across the total 32 slots.
AMD was probably using the lower capacity DIMMs in the video to stress the savings that could be had by any end user: the Naples platform can be populated with more DIMMs, at a lower cost for the lower capacity DIMMs, for that specific benchmarking demo. The smaller capacity DIMMs hit the sweet spot for cost and latency timings for compute workloads that benefit from that configuration. So AMD's Naples platform has more versatility in that respect than Intel's competing platform.
Sure, both AMD's and Intel's platforms can support more memory using larger capacity DIMMs, but at what price, and at what higher cost in memory latency/memory speed with the larger capacity DIMMs? AMD was demonstrating its Zen/Naples configurability advantages with respect to memory cost and memory latency/speed by having a larger DIMM hosting capacity in its Zen/Naples platform, for that seismic workload benchmark that would benefit most from using more smaller capacity DIMMs.
In that presentation they were actually running on just 512GB across the board, just slightly above Intel's 384GB, which was already the cap for those Xeons. Basically, AMD hasn't shown Naples' final form yet, not even the slightest bit of it. My god.
You are using English words but you are not making your thoughts clear for any rational reasons. You have no demonstrated inductive or deductive reasoning skills and you do not appear to even have the ability to impart any cogent thoughts using those English words you so like to spew forth with.
Someone, ban this kindergartner-tier asshat troll already.
Yeah, probably a typo then, indeed.
And 4TB with 128GB LRDIMMs. https://www.skhynix.com/eng/popup/prdLRDIMM.jsp
They’re marketing it with comparisons to Xeon E5, but this seems like a closer match to either Xeon E7 (which is now Xeon Gold/Platinum for some reason?) or, if performance per core drops to maintain TDP, Xeon D. Xeon D lacks all the connectivity stuff Naples has though.