It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
On today's show:
00:39 – Ryan's worst PC build?
03:18 – PCIe vs. USB sound card?
06:10 – AMD APU Infinity Fabric for GPU & CPU?
08:06 – SSD prices in 2018?
10:42 – Firmware upgrades for GPUs?
13:13 – Storage configuration for Premiere Pro editing?
16:11 – PC hardware for Star Citizen?
19:18 – 10-gigabit networking?
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
Question: What, if anything, are the big three manufacturers (Samsung, SK Hynix, and Micron) doing to address the memory shortage? And why doesn’t someone like TSMC or GlobalFoundries enter the memory fabrication market, considering the big three have seen profit margins almost double in the last year?
AMD APU Infinity Fabric for GPU & CPU?
The Infinity Fabric on discrete Vega is not tied to the memory clock. And since Vega GPUs also use the Infinity Fabric, there is the possibility of wiring two separate Vega GPU dies together via the fabric, just as is done with Zen/Zeppelin dies, and having the two GPU dies appear to software as one larger logical GPU. This S/A article (1) explains what the IF is and why AMD created it, and how AMD will use it for GPUs in a similar manner to how IF is currently used with Zen/Zeppelin across all of AMD’s Ryzen/TR/Epyc SKUs.
Raven Ridge uses the IF as well, and AMD will be including HBM2 on some future APU variants; that usage will probably appear on some workstation APU variants first and will cost more in the beginning.
“On the surface it sounds like AMD has a new fabric to replace Hypertransport but that isn’t quite accurate. Infinity Fabric is not a single thing, it is a collection of busses, protocols, controllers, and all the rest of the bits. Infinity Fabric (IF) is based on Coherent Hypertransport “plus enhancements”, at a briefing one engineer referred to it as Hypertransport+ more than once. Think of CHT+ as the protocol that IF talks as a start.” (1)
[…]
“Going down to the metal, or at least metal traces, there isn’t one fabric in IF but two. As you can see the control fabric is distinct from the data fabric which goes a long way towards enabling the scalable, secure, and authenticated goals. Control packets don’t play well with congested data links, and security tends to work better out-of-band too. QoS also play better if you can control it external to the data flows. So far IF seems to be aimed right.” (1)
With the Infinity Fabric IP in place, AMD could do with Vega the very same thing that it has done with Zen/Zeppelin and will be doing with Navi: coherently (cache, control, and data) tie CPU core complexes/dies or GPU dies/shader cores together. So that modular-die, Zen/Zeppelin sort of “glue” ability is already baked into Vega as well! For Navi, the modular dies/chiplets will just be smaller than any Vega dies currently in production. AMD could, if it wanted, use the Infinity Fabric on a dual-Vega-die PCIe card variant, but Navi is going to arrive in 2019 regardless.
(1) “AMD Infinity Fabric underpins everything they will make” – https://semiaccurate.com/2017/01/19/amd-infinity-fabric-underpins-everything-will-make/
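For a rough sense of what “tied to the memory clock” means in bandwidth terms, here is a back-of-envelope sketch. It assumes the commonly cited Zen/Zeppelin arrangement (fabric clock at half the DDR4 transfer rate, roughly 32 bytes per cycle per link); these are illustrative figures, not AMD-published specs for Vega.

```python
# Back-of-envelope Infinity Fabric link bandwidth, assuming the data
# fabric runs at MEMCLK (half the DDR4 transfer rate) and moves ~32
# bytes per cycle per link, as commonly cited for Zen/Zeppelin.

def if_link_bandwidth_gbs(ddr_rate_mtps: int, bytes_per_cycle: int = 32) -> float:
    """Approximate one IF link's bandwidth in GB/s for a given DDR4 speed."""
    fclk_mhz = ddr_rate_mtps / 2  # on Zen, the fabric clock tracks MEMCLK
    return fclk_mhz * 1e6 * bytes_per_cycle / 1e9

for ddr in (2133, 2666, 3200):
    print(f"DDR4-{ddr}: ~{if_link_bandwidth_gbs(ddr):.1f} GB/s per link")
# DDR4-2666 works out to ~42.7 GB/s, in line with the die-to-die figure
# AMD has quoted for Epyc. On discrete Vega the fabric clock is decoupled
# from the memory clock, so this scaling would not apply there.
```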
Thanks for the details. It was part of the reason I asked the team about the integration.
And the other part I was slightly concerned about was that their APU integration would reduce the PCIe lanes available to other devices. But since they are most likely using IF, I hope they will leave the PCIe lanes as on the full Ryzen?
It would make for a good rendering system, gaining some extra rendering power.
APUs, laptop APUs especially, cannot get PCIe 4.0 soon enough, as there are now TB3 and PCIe x4 SSDs to consider on laptops. I have realized why desktop PCs are always the first to get the latest connection standards (USB, TB, and others) while laptops are not, and that’s because laptops never get enough PCIe lanes and bandwidth.
Laptops should be getting PCIe 4.0 before desktops, as laptops are the most PCIe-lane constrained. TB3 on laptops cannot be guaranteed enough PCIe lanes to meet the TB3 controller’s full bandwidth needs, so PCIe 4.0 really needs to roll out for laptops first.
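To put rough numbers on that lane squeeze, here is a minimal sketch comparing headline link rates (encoding overhead applied for PCIe; real-world throughput, and TB3’s usable PCIe tunnel in particular, will be lower still):

```python
# Rough link-rate comparison behind the TB3 lane-squeeze argument.
# Uses headline per-lane rates with 128b/130b encoding applied;
# actual throughput is lower after protocol overhead.

PCIE_GTPS = {3: 8.0, 4: 16.0}  # GT/s per lane per PCIe generation

def pcie_gbps(gen: int, lanes: int) -> float:
    """Usable Gb/s for a PCIe link after 128b/130b encoding."""
    return PCIE_GTPS[gen] * lanes * (128 / 130)

tb3_link = 40.0  # Gb/s headline rate of a Thunderbolt 3 link

print(f"PCIe 3.0 x4: {pcie_gbps(3, 4):.1f} Gb/s")  # ~31.5 Gb/s
print(f"PCIe 4.0 x4: {pcie_gbps(4, 4):.1f} Gb/s")  # ~63.0 Gb/s
print(f"TB3 link:    {tb3_link:.1f} Gb/s")
# A PCIe 3.0 x4 uplink (~31.5 Gb/s) cannot saturate TB3's 40 Gb/s link,
# while PCIe 4.0 x4 would have headroom to spare.
```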
Intel and AMD really need to give their mobile CPU/graphics SKUs more PCIe lanes from the chipset, because that lane shortage is what really led to TB3 seeing wider adoption on desktops than on laptops, Apple’s MacBooks excluded. And even with Intel trying to open up its TB3 standard to a wider market, it’s not going to happen for non-Apple laptops given the limited number of PCIe 3.0 lanes offered on both Intel’s and AMD’s mobile SoC/APU products.
On laptops, if AMD’s APUs ever get HBM2 included, that will help; but the discrete mobile Vega variants that will come with 4GB of HBM2 will need more PCIe bandwidth if they make use of Vega’s HBCC/HBC (HBM2) IP for virtual VRAM. With Vega/HBM2, the HBCC has to have some method of transferring virtual VRAM pages over the PCIe lanes to reach system memory, and that’s something AMD has never clearly described. Discrete mobile Vega with 4GB of HBM2 is going to be available shortly, and AMD really needs to provide some whitepaper guidance on how HBCC/HBC (HBM2) will work for discrete mobile Vega on laptops with limited PCIe connectivity.
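Since AMD hasn’t published how HBCC paging actually works over PCIe, here is a purely hypothetical per-frame budget showing why the link width matters: how much page traffic a given PCIe link could move per rendered frame if fully saturated.

```python
# Hypothetical per-frame paging budget for HBCC-style virtual VRAM.
# AMD has not documented the actual HBCC transfer mechanism; this only
# shows how much data a given PCIe link could move per rendered frame.

def pcie_gb_per_s(gen: int, lanes: int) -> float:
    """GB/s for a PCIe link after 128b/130b encoding."""
    per_lane = {3: 8.0, 4: 16.0}[gen] * (128 / 130) / 8  # GB/s per lane
    return per_lane * lanes

def paging_budget_mb(gen: int, lanes: int, fps: int) -> float:
    """MB of page traffic that fits in one frame time with the link saturated."""
    return pcie_gb_per_s(gen, lanes) * 1000 / fps

for lanes in (8, 16):
    print(f"PCIe 3.0 x{lanes} @ 60 fps: ~{paging_budget_mb(3, lanes, 60):.0f} MB/frame")
# PCIe 3.0 x8 (~7.9 GB/s) allows roughly 130 MB of page traffic per
# 60 fps frame even if nothing else used the link, which is why lane
# counts matter so much for HBM2-as-cache on laptops.
```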
At least Intel’s semi-custom CPU/Vega discrete-die SKUs, built on an EMIB (Embedded Multi-die Interconnect Bridge) MCM, will also make use of HBM2 and Vega’s HBCC/HBC IP. So Vega’s HBCC/HBC with HBM2 as cache can be tested there, to see what bandwidth utilization occurs on the PCIe 3.0 x8 connection that interfaces the Vega/HBM2 die to the Intel SoC on that MCM, and how that may work out for any discrete mobile Vega GPUs with HBM2 used on laptops that don’t use Intel’s EMIB/MCM SKUs.
Allyn has mentioned a couple of times that a lot of the M.2 heatsinks aren’t optimal for NVMe drives, because most of them cool the flash, which benefits from higher temperatures, instead of just the controller. What about XPoint? Does its memory see similar endurance gains from running warmer like NAND, or might XPoint actually benefit from the cooling?
QUESTION: The recently released Gigabyte X299 Designare EX motherboard lists “Support for Registered DIMM 1Rx8/2Rx8/1Rx4/2Rx4 memory modules (operate in non-ECC mode).” What user workload scenarios would benefit from utilizing that memory setup?
Question: Current limitations for external GPUs appear to be frame pacing and insufficient bandwidth for high-framerate/high-refresh gaming, both of which I assume are issues that could be resolved with a faster interface.
Assuming we can lessen these limitations, do you see a bigger future for external GPUs, or will they remain niche?
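For a sense of scale on the bandwidth side, here is a minimal sketch (assumed resolutions and headline link rates, not measurements) of what it costs just to ship finished frames back over the external link to a laptop’s internal display:

```python
# Why eGPU bandwidth pinches at high refresh rates: if rendered frames
# must travel back over the external link to the laptop's internal
# display, each frame costs width * height * 4 bytes (uncompressed RGBA).
# Link figures are headline rates; a real TB3 PCIe tunnel carries less.

def frame_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    """Size of one uncompressed frame in MB."""
    return width * height * bytes_per_pixel / 1e6

def required_gbps(width: int, height: int, hz: int) -> float:
    """Raw Gb/s needed to ship every frame back uncompressed."""
    return frame_mb(width, height) * hz * 8 / 1000

for (w, h), hz in (((1920, 1080), 144), ((2560, 1440), 144)):
    print(f"{w}x{h} @ {hz} Hz: ~{required_gbps(w, h, hz):.1f} Gb/s")
# 1080p144 needs ~9.6 Gb/s and 1440p144 ~17.0 Gb/s just for readback,
# a big bite out of a 40 Gb/s TB3 link that also carries draw traffic,
# which is why a faster interface (or an external display) helps.
```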
Ryan answered almost this exact question in episode #15. Here’s the link, question answered at 6:40…
https://www.youtube.com/watch?v=-jDsXefwggQ
Question: We’re likely to see the release of the new AMD desktop APUs in the next week or so (I bet you even already have them). When will we see AM4 motherboards with HDMI 2.0? These systems are perfect for HTPC applications, but if you want a 4K TV connected, you’re limited by HDMI 1.4.
P.S. Please wield your immense industry influence to serve my needs! ; )
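For reference on that HDMI 1.4 limitation, a quick back-of-envelope check (assuming standard CTA-861 4K timings and 8-bit RGB) of why 4K tops out at 30 Hz until boards ship HDMI 2.0:

```python
# Why HDMI 1.4 caps a 4K TV at 30 Hz: compare the data rate each mode
# needs against each spec's usable rate. Total pixels include blanking
# (standard CTA-861 4K timing is 4400 x 2250); HDMI TMDS uses 8b/10b
# encoding, so usable data rate is 80% of the headline bandwidth.

HDMI_DATA_GBPS = {"1.4": 10.2 * 0.8, "2.0": 18.0 * 0.8}  # usable Gb/s

def mode_gbps(total_w: int, total_h: int, hz: int, bpp: int = 24) -> float:
    """Data rate in Gb/s for a video mode at 8-bit RGB (24 bpp)."""
    return total_w * total_h * hz * bpp / 1e9

for hz in (30, 60):
    need = mode_gbps(4400, 2250, hz)
    fits = [v for v, cap in HDMI_DATA_GBPS.items() if need <= cap]
    print(f"4K @ {hz} Hz needs ~{need:.1f} Gb/s -> fits: {fits}")
# 4K30 (~7.1 Gb/s) fits HDMI 1.4's ~8.2 Gb/s of usable bandwidth;
# 4K60 (~14.3 Gb/s) needs HDMI 2.0's ~14.4 Gb/s, hence the HTPC limit.
```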