Benchmark Testing
Synthetic Benchmark Testing
SiSoft Sandra 2017 SP2
SiSoftware's Sandra benchmark is an industry-standard suite for measuring various aspects of a system's performance. We use the CPU and memory-subsystem tests to validate how well those subsystems perform relative to similarly classed boards. Each test was repeated three times, with the highest repeatable scores recorded from each benchmark.
The Sandra benchmarks remain a fast and easy way to assess a system from a CPU and memory-subsystem perspective. The X470 Gaming 7 WIFI motherboard performed well, matching the performance of the other similarly classed AMD X370 and Intel Z370-based systems. This CPU and memory performance parity indicates proper CPU and memory-subsystem operation.
Multimedia and System Benchmark Testing
Handbrake v1.0.7
HandBrake was used to convert an uncompressed version of the Iron Man Blu-ray movie in MKV format to a compressed 1080p30 MP4. The Iron Man MKV file had previously been ripped from the Blu-ray disc, with the uncompressed file weighing in at 26 GB. HandBrake was run with the Fast 1080p30 preset settings, with the exception of Anamorphic, which was set to Loose. This test was repeated three times, with the lowest repeatable conversion time recorded.
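For readers who want to reproduce the encode, the sketch below shells out to HandBrakeCLI with flags approximating the settings described above; the input and output file names are placeholders, and it assumes the CLI build exposes the same built-in Fast 1080p30 preset as the GUI.

```cpp
// Minimal sketch of the same encode driven from code; HandBrakeCLI must be on
// the PATH, and the file names here are placeholders.
#include <cstdlib>

int main() {
    // --preset selects HandBrake's built-in Fast 1080p30 preset;
    // --loose-anamorphic mirrors the "Anamorphic: Loose" setting noted above.
    return std::system(
        "HandBrakeCLI -i IronMan.mkv -o IronMan-1080p30.mp4 "
        "--preset \"Fast 1080p30\" --loose-anamorphic");
}
```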
Video encoding is one of the more system-intensive operations, making it a good test for exposing performance quirks under heavy load. The X470 Gaming 7 WIFI board dominated the other enthusiast-class boards, outclassed only by the workstation-class Intel X299-based system.
Maxon Cinebench R15
Maxon’s Cinebench R15 benchmark, based on the company's Cinema 4D animation software, can be used to determine a system's ability to render 3D content. The CPU benchmark test was run three times, with the highest reproducible Cinebench point score recorded.
The X470 Gaming 7 WIFI continued to dominate its peers, even beginning to approach the performance of the more powerful Intel X299-based system.
FutureMark PCMark 8
Futuremark Corporation’s PCMark 8 can be used to reliably ascertain a system’s performance in a Windows 10-based usage environment. The test suites chosen were the Home, Creative, and Work tests. Each suite was run three times, with the highest reproducible PCMark scores recorded. Note that the Applications test results were not included because of compatibility issues between that benchmark and the version of Windows 10 used for testing.
The X470 Gaming 7 WIFI motherboard performed well in this real-world application benchmark, but it did not make the strong performance leap seen in the other multimedia benchmarks. Even so, the board remains a very strong competitor compared with the other test systems.
“Support for NVIDIA® Quad-GPU SLI™ and 2-Way NVIDIA® SLI™ technologies
Support for AMD Quad-GPU CrossFire™ and 2-Way AMD CrossFire™ technologies”
With only 3 PCIe x16 slots (whatever the electrical configuration), how is Quad-GPU SLI/CF support possible on this MB? Can this board somehow be plugged into the DeLorean and initiate time travel? It sure has enough LED bling to qualify as a prop for a 1980s sci-fi comedy.
No, you Dr. Emmett Brown wannabe. Quad SLI/XFire is for cards that have TWO GPUs on each card, like the Titan Z (must say "Titan Z" with a heavy German accent) or like the 295X2.
Really, the drivers are mostly going to abstract away Ze dual GPUs on Ze one PCIe card, so that's not what CF/SLI is about. AMD's CF uses XDMA while Nvidia uses a hardware bridge. But you are still wrong about this MB, as it has only 3 PCIe x16 slots (whatever the electrical configuration), and some folks in the past have run 4 of those dual-GPU-on-one-PCIe-card SKUs in a single system. This MB cannot support 4 different cards at the same time, so that's just BS on your part!
CF and SLI are still not very good at multi-GPU load balancing, but maybe with DX12/Vulkan's explicit multi-adapter managed by these new graphics APIs, and some game programmers that are competent and not whining script kiddies, there can be more progress. It should not be a problem for most GPUs that can work with DX12/Vulkan to have proper APIs developed to hold the script kiddies' hands and automate proper GPU load balancing under DX12/Vulkan or even Apple's Metal. Poor little "programmers," so wedded to OpenGL's complex and software-abstracted state-machine design that they cannot deal with any GPU metal. But that's OK, as there will be middleware and game engine SDKs to help.
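To make the "explicit multi-adapter" idea above concrete, here is a minimal sketch, assuming a Vulkan 1.1 loader is present, of how an application enumerates device groups (the Vulkan-side counterpart to DX12's explicit multi-adapter), so that the engine, not the driver, decides how work is split across GPUs; instance setup is bare-bones and error handling is omitted.

```cpp
// Sketch only: enumerate Vulkan device groups (core in Vulkan 1.1), the
// explicit multi-GPU path where the application itself schedules the GPUs.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;          // device groups need 1.1
    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS)
        return 1;                                 // no usable Vulkan loader/driver

    uint32_t count = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &count, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        count, VkPhysicalDeviceGroupProperties{
                   VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
    vkEnumeratePhysicalDeviceGroups(instance, &count, groups.data());

    // Each group lists GPUs that can be driven as one logical device; it is
    // the application's job to balance rendering work across them.
    for (uint32_t i = 0; i < count; ++i)
        std::printf("Device group %u: %u GPU(s)\n", i,
                    groups[i].physicalDeviceCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```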
CF/SLI is not so good for games because of all the single-threaded latency issues in dealing with multiple GPUs, but really GPUs are parallel beasts, and newer mainstream CPU SKUs are getting way more cores and threads. So with proper programmers and DX12/Vulkan/etc., that can be fixed over time. Nvidia sure is not receptive to more than 2 GPUs for SLI, and AMD maybe needs to go back to using bridge connectors instead of XDMA and make use of Infinity Fabric instead. Nvidia has NVLink that it could speak across its bridge connectors, but Nvidia appears not to be as interested in multi-GPU usage for gaming just yet.
The gaming/game-engine industry mostly is not taking the time to properly hide latency issues in its games, and is relying too much on the CPU and GPU makers to throw ever more powerful hardware its way, so it does not have to optimize PC games the way console game/engine makers must in order to eke out every last bit of performance from those consoles' relatively weak hardware.
Really, both AMD and Nvidia maybe need to slow down on new hardware features and spend more time optimizing their GPUs' firmware/driver and API support. But Nvidia makes loads of dosh on new hardware sales at the expense of its older GPU hardware, while AMD open-sourcing most of its Vulkan driver development may see older AMD hardware (GCN 1.2 and later) continue to net performance gains over time.
Poor AMD (at the time) bit off more than it could chew trying to get that implicit primitive shader API layer to work for legacy games that are not written to take advantage of the explicit primitive shader hardware in AMD's Vega GPU microarchitecture. But game engine makers are still free to target Vega's explicit hardware primitive shaders, even if that's not going to catch on as soon as AMD had hoped for PC gaming. Maybe the open-source community can get around to targeting Vega's explicit primitive shaders, or that Chinese console maker using the new AMD semi-custom Zen/Vega APU will. Once the console makers switch over to all Zen/Vega-based console hardware, you can be damned sure they will target Vega's explicit primitive hardware shaders and Rapid Packed Math, etc.
The marketing wank is “NVIDIA Quad-GPU SLI”. It is not “NVIDIA® Quad-card SLI”. How do you get Quad-GPU SLI on a system that features 2-way SLI? Get two graphics cards with two GPUs each, and there you have Quad-GPU SLI. Also, from the horse's mouth: http://www.nvidia.com/object/slizone_quadsli.html
So yes, that “Dr. Emmett Brown wannabe” is right, you annoying brat…
Oopsie, the “Anonymousnameisalreadyused” commenter was right, not the “Dr. Emmett Brown wannabe”… Argh…
Thanks for the review.
Morry, do you know what the 'EDC %' is at stock and when overclocking? Ryzen Master monitors this metric.
I am running a 2700X on an ASRock Fatal1ty X470 mini-ITX board in an In Win 901 case, and at stock 'EDC' is hitting max, so I am assuming that is why it is stuck at around 3900 MHz on all cores when running Cinebench.
It could be temps as well, but the Noctua I am using is excellent, and it was the same with the Cooler Master AIO I tried before the Noctua.
I think the issue is that the VRM is not beefy enough to fully max out the CPU, because I believe 'EDC' is the max current the VRM is able to handle.
Just a slight correction you might want to make in the Features and Motherboard Layout section. I was a bit confused when I read the passage below, so I double-checked it in the manufacturer's manual.
Note that the port M2A_SOCKET and the tertiary PCIe x16 slot share bandwidth. The PCIe x16 slot is disabled with an M.2 drive seated in that port.
This should read that the “M2B_SOCKET and the tertiary PCIe x16 slot share bandwidth.”
Sourced from the manufacturer's manual, page 7, Expansion Slots section:
1 x PCI Express x16 slot, running at x4 (PCIEX4)
* The PCIEX4 slot becomes unavailable when a device is installed in the M2B_SOCKET connector.
Hope this clears up any confusion.
Thanks for pointing this out. It has been updated…
Any thoughts on getting around the M.2 80 mm slot performance problem by using a PCI-E 3.0-compliant adapter card in the second x16 slot? I know this would drop the first two slots to x8 speeds, but most real-world benchmarking seems to suggest only a small overall performance loss when a graphics card is in the first slot.
Anyone think it's worth the trade-off?
It's worth it if you need to run two or more M.2 drives in RAID mode. You won't see much, if any, performance loss between x16 and x8 on the video card unless you are running 4K, most likely…
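For rough context on the bandwidth involved, here is a back-of-the-envelope sketch (the figures are approximations, not from the review): PCIe 3.0 carries roughly 0.985 GB/s per lane after 128b/130b encoding, so an x8 graphics slot still offers close to 8 GB/s, and an NVMe drive in an adapter is capped at the x4 figure anyway.

```cpp
// Back-of-the-envelope PCIe 3.0 bandwidth: 8 GT/s per lane with 128b/130b
// encoding works out to roughly 0.985 GB/s of payload per lane.
#include <cstdio>

int main() {
    const double per_lane_gbs = 8.0 * 128.0 / 130.0 / 8.0;  // ~0.985 GB/s per lane
    std::printf("x16 graphics slot: ~%.1f GB/s\n", 16 * per_lane_gbs);
    std::printf("x8  graphics slot: ~%.1f GB/s\n",  8 * per_lane_gbs);
    std::printf("x4  M.2 / adapter: ~%.1f GB/s\n",  4 * per_lane_gbs);
    return 0;
}
```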
Thanks for the reply on this one, Morry.
One more question I had was around RAM and this board. Given what you noted in the review about the memory speeds and this board, is there much point in going above DDR4-3200? I’m planning to overclock my Ryzen 2700X to around 4.2 GHz paired with a GTX 1080Ti. I had been looking at some Corsair Vengeance DDR4-3600 up until I read through the review. Thoughts?
No, there's not much point going above stock speeds on memory; you see little improvement performance-wise. Best to try to maximize your core speeds…
Appreciate the quick reply again, Morry!