Overclocking and Conclusion
Overclocking
To give a feel for the overclocking performance potential of the X470 Gaming 7 WIFI motherboard, we attempted to push it to known CPU-supported performance parameters with minimal tweaking. At the stock base clock speed of 100MHz, we were able to get the system stable at a 4.3GHz CPU speed across all cores and a 2667MHz memory speed. This was done with a 1.45V CPU voltage, a 1.35V VCORE SOC voltage, a 2.12V CPU VDD18 voltage, and a +0.20V CPU VDDP offset, with all other values left at default settings. Unfortunately, the system would not stabilize at any CPU or memory speeds greater than those listed above. All overclocking sessions remained stable for over four hours. System stability was tested by running the AIDA64 stability test in conjunction with EVGA's OC Scanner X graphical benchmark at 1280×1024 resolution with 8x MSAA in stress test mode. Note that 8GB (2 x 4GB) of Corsair Dominator Platinum DDR4-4000 memory modules were used for the overclocking tests.
100MHz Base Clock Stats with 4.3GHz CPU and 2667MHz memory speed
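As a quick sanity check, the resulting clocks follow directly from the 100MHz base clock and the multipliers; the memory ratio below is inferred from the effective speed rather than recorded from the BIOS:

$$f_{\text{CPU}} = 100\ \text{MHz} \times 43 = 4.3\ \text{GHz}$$

$$f_{\text{MEM}} = 100\ \text{MHz} \times 13.33 \times 2\ (\text{DDR}) \approx 2667\ \text{MT/s}$$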
Note that this is meant only as a quick preview of the board's performance potential. With more time to tweak the settings, a higher base clock and memory speed may have been achievable, along with a clean overnight stability run.
Pricing
As of August 1, the GIGABYTE X470 AORUS Gaming 7 WIFI motherboard was available from Amazon.com for $228.49 with Prime shipping. The board was also available from Newegg.com for $228.63.
Conclusion
The X470 AORUS Gaming 7 WIFI motherboard is another winning product in GIGABYTE's AORUS motherboard line. Like its predecessor, the board integrates all the features found on the latest revisions of GIGABYTE's Intel boards, including integrated RGB LEDs with external RGB LED strip support, top-notch sound, and a wide variety of storage options (including its PCIe x4 M.2 slots). Further, its chipset- and CPU-controlled USB 3.1 and 3.0 ports perform much better than the ASMedia controller variants. With its simple black and white aesthetic, the board easily finds a home in any enthusiast build. It performs as well as it looks at stock speeds, though we encountered some oddities while overclocking: the processor achieved a respectable all-core overclock of 4.3GHz, but memory speed remained firmly capped at 2667MHz with the modules used for testing.
Strengths
- Stock performance
- Overclocking potential
- Board aesthetics, layout, and design
- UEFI BIOS design and usability
- Variety of storage solution support, including SATA and M.2
- Intel-based network offerings – GigE and 2×2 802.11ac WiFi adapter ports
- PCIe x1 slot 1 usable with a dual-slot video card seated in the primary PCIe x16 slot
- Configurable RGB LEDs using RGB Fusion through both UEFI and Windows app
- CMOS battery placement
Weaknesses
- 80mm M.2 slot uses the PCIe 2.0 bus, limiting upper device speed to roughly 1600 MB/s (see the arithmetic after this list)
- Unable to run memory at speeds above 2667MHz
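On the M.2 point above, the ceiling follows from PCIe 2.0 signaling, and is worth spelling out (the ~1600 MB/s figure is the practical rate after protocol overhead):

$$5\ \text{GT/s} \times \tfrac{8}{10}\ (\text{8b/10b encoding}) = 4\ \text{Gbit/s} = 500\ \text{MB/s per lane}$$

$$500\ \text{MB/s} \times 4\ \text{lanes} = 2000\ \text{MB/s theoretical, roughly } 1600\ \text{MB/s in practice}$$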
“Support for NVIDIA® Quad-GPU SLI™ and 2-Way NVIDIA® SLI™ technologies
Support for AMD Quad-GPU CrossFire™ and 2-Way AMD CrossFire™ technologies”
With only 3 PCIe x16 slots (whatever the electrical wiring), how is Quad-GPU SLI/CF support possible on this MB? Can this board somehow be plugged into the DeLorean and initiate time travel? It sure has enough LED bling to qualify as a prop for a 1980s sci-fi comedy.
No, you Dr. Emmett Brown wannabe. Quad SLI/CrossFire is for cards that have TWO GPUs on each card, like the Titan Z (must say "Titan Z" with a heavy German accent) or the 295X2.
Really, the drivers are going to abstract away ze dual GPUs on ze one PCIe card mostly, so that's not what CF/SLI is about. AMD's CF uses XDMA while Nvidia uses a hardware bridge. But you are still wrong about this MB, as it has only 3 PCIe x16 (whatever electrical) slots, and some folks in the past have run 4 of those dual-GPU-on-one-PCIe-card SKUs in a single system. This MB cannot support 4 different cards at the same time, so that's just BS on your part!
CF and SLI are still not very good at multi-GPU load balancing, but maybe with DX12/Vulkan and the explicit GPU multi-adapter mode managed by these new graphics APIs, plus some game programmers who are competent and not whining script kiddies, there can be more progress. It should not be a problem for most GPUs that work with DX12/Vulkan to have proper APIs developed to hold the script kiddies' hands and automate proper GPU load balancing under DX12/Vulkan or even Apple's Metal. Poor little "programmers," so wedded to OpenGL's complex, software-abstracted state machine design that they cannot deal with any GPU metal. But that's OK, as there will be middleware and game engine SDKs to help.
CF/SLI is not so good for games because of all the single-threaded latency issues in dealing with multiple GPUs, but really, GPUs are parallel beasts, and newer CPUs are getting way more cores and threads on mainstream SKUs. So with proper programmers and DX12/Vulkan/etc., that can be fixed over time. Nvidia sure is not receptive to more than 2 GPUs for SLI, and AMD maybe needs to go back to using bridge connectors instead of XDMA and make use of Infinity Fabric instead. Nvidia has NVLink that it could speak across its bridge connectors, but Nvidia appears not to be as interested in multi-GPU usage for gaming just yet.
The entire game/game engine industry mostly is not taking the time to properly hide the latency issues in their games, and is relying too much on the CPU and GPU makers to throw ever more powerful hardware their way so they do not have to optimize PC games as much as the console game/engine makers have to in order to eke out every last bit of performance from consoles' relatively weak hardware.
Really, both AMD and Nvidia maybe need to slow down on the new hardware features and spend more time optimizing their GPUs' firmware/driver and API support, but Nvidia makes loads of dosh with its new hardware sales at the expense of its older GPU hardware, while AMD open-sourcing most of its Vulkan driver development may see some older AMD hardware (GCN 1.2/later) continue to net performance gains over time.
Poor AMD (at the time) bit off more than it could chew trying to get that implicit primitive shader API layer to work for legacy games that are not written to take advantage of the explicit primitive shader hardware in AMD's Vega GPU micro-arch. But game engine makers are still free to target Vega's explicit hardware primitive shaders, even if that's not going to catch on as soon as AMD had hoped for PC gaming. Maybe the open source community can get around to targeting Vega's explicit primitive hardware shaders, or that Chinese console maker that's using that new AMD semi-custom Zen/Vega APU. Once the console makers switch over to all Zen/Vega based console hardware, you can be damned sure they will target Vega's explicit primitive hardware shaders and Rapid Packed Math/etc.
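On the explicit multi-adapter point raised above: for readers curious what that looks like to a programmer, below is a minimal, hypothetical C++ sketch of the first step under DX12, enumerating every GPU through DXGI so the application itself, rather than an SLI/CrossFire driver profile, decides how to split work. It assumes the Windows 10 SDK and is illustrative only, not production code.

```cpp
// Hypothetical sketch: list the GPUs that a DX12 explicit
// multi-adapter renderer could drive individually.
#include <windows.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software rasterizer
        // Under explicit multi-adapter, the app would create one
        // ID3D12Device per hardware adapter found here and balance
        // work across them itself, with no driver-side SLI/CF magic.
        wprintf(L"Adapter %u: %s (%zu MB dedicated VRAM)\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
    return 0;
}
```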
The marketing wank is "NVIDIA Quad-GPU SLI". It is not "NVIDIA® Quad-card SLI". How do you get Quad-GPU SLI on a system that features 2-way SLI? Get two graphics cards with two GPUs each, and there you have Quad-GPU SLI. Also, from the horse's mouth: http://www.nvidia.com/object/slizone_quadsli.html
So yes, that "Dr. Emmett Brown wannabe" is right, you annoying brat…
Oopsie, "Anonymousnameisalreadyused" was right, not the "Dr. Emmett Brown wannabe"… Argh…
Thanks for the review.
Morry, do you know what the 'EDC %' is at stock and when overclocking? Ryzen Master monitors this metric.
I am running a 2700X on an ASRock Fatal1ty mini-ITX X470 in an In Win 901 case, and at stock 'EDC' is hitting max, so I am assuming that is why it is stuck at around 3900MHz on all cores when running Cinebench.
It could be temps as well, but the Noctua cooler I am using is excellent, and it was the same with the Cooler Master AIO I tried before the Noctua.
I think the issue is that the VRM is not beefy enough to fully max out the CPU, because I believe 'EDC' is the max current the VRM is able to handle.
Just a slight correction you might want to make in the Features and Motherboard Layout section. I was a bit confused when I read the below, so I double-checked it in the manufacturer's manual.
Note that the port M2A_SOCKET and the tertiary PCIe x16 slot share bandwidth. The PCIe x16 slot is disabled with an M.2 drive seated in that port.
This should read that the “M2B_SOCKET and the tertiary PCIe x16 slot share bandwidth.”
Sourced from the manufacturer's manual, page 7, Expansion Slots section:
1 x PCI Express x16 slot, running at x4 (PCIEX4)
* The PCIEX4 slot becomes unavailable when a device is installed in the M2B_SOCKET connector.
Hope this clears up any confusion.
Thanks for pointing this out. It has been updated…
Any thoughts on getting around the M.2 80mm slot performance problem by using a PCIe 3.0 compliant adapter card in the second x16 slot? I know this would drop the first two slots to x8 speeds, but most real-world benchmarking seems to suggest little overall performance loss with a graphics card in the first slot.
Anyone think it's worth the trade-off?
Worth it if you need to run two or more M.2s in RAID mode. You won't see much, if any, performance loss between x16 and x8 on the video card unless you are running 4K, most likely…
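For the bandwidth half of that trade-off, the theoretical link rates work out as follows (real drives and cards land somewhat below these):

$$\text{PCIe 3.0: } 8\ \text{GT/s} \times \tfrac{128}{130}\ (\text{encoding}) \approx 985\ \text{MB/s per lane}$$

$$\text{Adapter-mounted M.2 at x4} \approx 3.9\ \text{GB/s vs. } 2.0\ \text{GB/s for the native PCIe 2.0 x4 slot; GPU at x8} \approx 7.9\ \text{GB/s vs. } 15.8\ \text{GB/s at x16}$$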
Thanks for the reply on this one, Morry.
One more question I had was around RAM and this board. Given what you noted in the review about memory speeds, is there much point in going above DDR4-3200? I'm planning to overclock my Ryzen 2700X to around 4.2GHz, paired with a GTX 1080 Ti. I had been looking at some Corsair Vengeance DDR4-3600 up until I read through the review. Thoughts?
No, not much point going above stock speeds on memory; you see little performance improvement. Best to try to maximize your core speeds…
Appreciate the quick reply again, Morry!