It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
On today's show, Josh is back in the hot seat:
00:59 – PCPer studio audio interfaces? Thunderbolt 3 on chipset?
03:39 – PCPer microphones?
04:54 – PCPer audio podcast vs. YouTube video sound quality?
06:03 – 144Hz HDR at 1440p?
07:33 – Running ASUS 144Hz 4K HDR monitor at 1080p?
09:25 – PCIe 4.0 availability?
11:17 – HBM2 vs. GDDR6 for next-gen GPUs?
13:07 – Ryzen APU stuttering?
14:03 – "So, when's Tom Petersen coming back?" *wink wink*
14:43 – Intel skipping 10nm for 7nm?
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
IBM/OpenPower Power9 processors already support PCIe 4.0, so Intel is not first there. And laptops need PCIe 4.0 before desktops do: the laptop OEMs will see their market continue to stagnate if they do not start adopting PCIe 4.0, especially with SSDs that need 4 PCIe lanes becoming more popular! Ditto for the USB 3.1 Gen 2 10Gb/s requirements, and USB 3.2 is ready, which link-bonds/link-aggregates two USB 3.1 Gen 2 channels to provide a total of 20Gb/s of bandwidth. So USB 3.2 and 20Gb/s for laptops is not going to be possible with the limited number of PCIe lanes on laptops unless the motherboard chipset makers and laptop OEMs start to adopt PCIe 4.0.
One would think that Apple would be all over Intel trying to get PCIe 4.0 for its MacBooks' TB3 and SSD bandwidth requirements, and laptop motherboards can be simpler, easier to make, and smaller with faster PCIe standards that offer more bandwidth per PCIe lane.
AMD is rumored to be adding PCIe 4.0 support to its Epyc/Rome Zen 2 based SOCs, so maybe that's going to be possible in 2019 once AMD moves over to Zen 2 for all of its CPU/APU offerings. All those SERDES lanes on the Zen/Zeppelin die are already faster than PCIe anyway; it's just a matter of hanging a PCIe 4.0 controller/PHY off of the SERDES and getting the motherboard makers up to standard for PCIe 4.0 later.
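Just to put numbers on the 20Gb/s point above, here is a rough back-of-the-envelope sketch of how many PCIe lanes it takes to feed a 20Gb/s USB 3.2 controller at gen3 versus gen4 speeds. The usable per-lane figures assume only PCIe's 128b/130b encoding and ignore protocol overhead, so real-world numbers would be a bit lower:

```python
import math

# Raw PCIe transfer rate per lane is 8 GT/s for gen3 and 16 GT/s for gen4,
# scaled by 128b/130b encoding to approximate usable Gb/s per lane.
ENCODING = 128 / 130
LANE_GBPS = {"PCIe 3.0": 8 * ENCODING, "PCIe 4.0": 16 * ENCODING}

USB32_GBPS = 20  # two link-bonded USB 3.1 Gen 2 channels at 10 Gb/s each

for gen, lane in LANE_GBPS.items():
    lanes_needed = math.ceil(USB32_GBPS / lane)
    print(f"{gen}: {lane:.2f} Gb/s/lane -> {lanes_needed} lanes for 20 Gb/s USB 3.2")

# PCIe 3.0: 7.88 Gb/s/lane -> 3 lanes for 20 Gb/s USB 3.2
# PCIe 4.0: 15.75 Gb/s/lane -> 2 lanes for 20 Gb/s USB 3.2
```

By this rough math, moving to PCIe 4.0 would cut the lane budget for a full-speed USB 3.2 controller from three lanes to two, which is the crux of the lane-constrained-laptop argument.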
I think he meant in x86-land for the most part, with respect to the question and answer. There is always someone somewhere putting something out in special markets that beats Intel or AMD, but it's usually not compatible with x86, nor is it affordable.
Laptops based on either Intel or AMD can already get 4x PCIe lanes, and PCIe 3.0 is plenty fast for current common and affordable SSDs. Higher-end stuff like Optane could make real use of PCIe 4.0, but it's still too expensive to use in volume. Same goes for USB 3.1 Gen 2. The biggest potential application for PCIe 4.0 in x86-land is really in servers, not laptops or desktops.
For AMD, PCIe 4.0 could potentially improve inter-chip bandwidth and latency on their MCM Epyc CPUs, which would be a big boost in performance, since the IF bus being used there is essentially based off of PCIe. For the average user, not much will change with PCIe 4.0.
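As a quick sanity check on the "PCIe 3.0 is plenty fast" point, here is a minimal sketch comparing a PCIe 3.0 x4 link against a typical high-end NVMe drive. The ~3.5 GB/s sequential-read figure is an assumption based on current flagship M.2 SSDs, and again only encoding overhead is counted:

```python
# Usable bandwidth of a PCIe 3.0 x4 link (128b/130b encoding, other overhead ignored).
lanes = 4
gen3_gbps_per_lane = 8 * 128 / 130            # ~7.88 Gb/s per lane
link_gbytes = lanes * gen3_gbps_per_lane / 8  # ~3.94 GB/s for the full x4 link

ssd_gbytes = 3.5  # assumed sequential read of a current flagship NVMe SSD

print(f"PCIe 3.0 x4 usable: {link_gbytes:.2f} GB/s")
print(f"Headroom over a ~{ssd_gbytes} GB/s SSD: {link_gbytes - ssd_gbytes:.2f} GB/s")
```

Under those assumptions the x4 link still has a few hundred MB/s of headroom over today's fastest consumer drives, which supports the argument that gen4 matters more for servers than for mainstream laptops right now.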
Really, Power9s are not a special market, as that's server/cloud, and Power9/Nvidia are eating Intel's lunch in the top-end HPC design wins. Just look at the Sierra supercomputer!
And you are being more than disingenuous about PCIe lanes on laptops, what with all the problems with some Dell laptops only providing 2 PCIe 3.0 lanes to the TB3 controller, which is not enough PCIe connectivity. And laptops need more bandwidth to support USB 3.1 Gen 2 and even USB 3.2, in addition to maybe more laptops getting TB3 support so their owners can make use of desktop GPUs in external adapters.
“For AMD PCIe4.0 potentially could improve inter-chip bandwidth and latency on their MCM Epyc”
Hey there bubba, AMD uses SERDES for inter-die communication, not PCIe, and maybe you need to bookmark WikiChip and do a little reading before you spout off about things! By the way, ThreadRipper is MCM also, so that's just your lack of knowledge there as well.
POWER9 is pretty much HPC-only these days, and server/cloud is dominated by x86. The Sierra supercomputer is getting most of its performance from NV GPUs and not the IBM POWER9 CPUs, too, BTW.
That is just Dell being cheap; it's not due to a practical lack of PCIe lanes. USB 3.1 Gen 2 also generally isn't short of lanes, since the only major use of PCIe lanes in most laptops and desktops is a discrete GPU, and quite a few laptops and desktops don't even have one. They use an iGPU instead.
SERDES is a near-meaningless generic term (it applies to 10GbE Ethernet, at least some Intel USB 3.1 chips*, and PCIe 4.0, for instance) when talking about the buses AMD Epyc uses. The IF bus itself that is being used inter-package and inter-socket is based on PCIe; that is a big part of the reason why latency is so high there, and why it's compatible with the PCIe bus too if need be.
*read page 32 of this .pdf: https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/phy-interface-pci-express-sata-usb30-architectures-3.1.pdf
The SerDes on AMD's processors is much faster and comes in partnership with Synopsys (which has already demoed working USB 3.2 silicon). And that SerDes is much faster than PCIe 3.0, so PCIe is usually much slower than most processors' internal data fabrics.
“The Infinity Scalable Data Fabric (SDF) employs two different types of SerDes links – Infinity Fabric On-Package (IFOP) and Infinity Fabric InterSocket (IFIS).”
(1) “Infinity Fabric (IF) – AMD,” https://en.wikichip.org/wiki/amd/infinity_fabric
Again, SERDES is a type of bus, and a rather generic description at best. A lot of modern buses are SERDES types; it's common.
And the fact that the IF bus(es) are faster than PCIe 3.0 doesn't mean they can't be based off of it either. Again, it's completely compatible with PCIe if need be. There is only one way that is possible.
Josh:
there are some rumors (old ones, but from good sources) that Intel's 7nm process has been delayed to beyond 2020 and probably will be out closer to 2022. TSMC will probably be well along on getting their improved 5nm process ready, if not already producing parts on it, by that time frame. There is also some great information on future sub-5nm processes in a recent article over at Semiconductor Engineering* that unfortunately makes a really good case for process development pretty much halting at 3nm. Anyway, the question is: do you think it's correct at this point to assume that Intel has pretty much permanently lost much if not all of their highly vaunted process lead and will now be forced to compete on design merits alone for the most part going forward? Or do you think they can pull an ace out of the hole at the last moment? I have to say that, to me, it seems that while they're not going away anytime soon, they've also set themselves up to lose their overwhelmingly dominant position in the x86 CPU market over time.
Thanks
*https://semiengineering.com/big-trouble-at-3nm/
Whatever happened to Nvidia’s Simultaneous Multi-Projection (SMP) that was supposed to fix distortion on multi-screen? We’re 2 years in, and the only game I’m aware of that supports it is iRacing. When Nvidia first announced it, they were really talking it up and acting like they were going to really push developers on it, and it just seems like vaporware now.
[copy of my COMMENT at YouTube]
Josh, very good answer about PCIe 4.0. In the future, you might consider extrapolating on the future of PCIe SSD storage, given a raw transfer rate that doubles from 8 GT/s to 16 GT/s. In theory (at least), this doubled rate should benefit bleeding-edge technologies like the ASUS DIMM.2 slot and 4×4 add-in cards like the ASRock Ultra Quad M.2 card and the ASUS Hyper M.2 x16 card. I foresee an ASUS DIMM.2 add-in card also with room for 4 x M.2 NVMe SSDs. Likewise, at PCIe 4.0 the existing ASUS DIMM.2 add-in card should perform much like the current 4×4 add-in cards, assuming future M.2 NVMe SSDs take advantage of the extra headroom. Again, many thanks. (Montana rocks / Josh rocks Montana / choose one :)
EDIT: Make that Wyoming! (sorry for the typo! but, Josh rocks Montana too, imho)
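For anyone curious what that doubling works out to for a quad-M.2 x16 card like the ones mentioned above, here is a rough sketch. The four-drive, x4-per-drive layout is an assumption about how those cards split the slot, and only 128b/130b encoding overhead is counted:

```python
# Approximate usable bandwidth for a 4-drive x16 add-in card,
# assuming each M.2 drive sits on its own x4 link.
ENCODING = 128 / 130
drives, lanes_per_drive = 4, 4

for gen, gts in (("PCIe 3.0", 8), ("PCIe 4.0", 16)):
    per_drive = lanes_per_drive * gts * ENCODING / 8  # GB/s per x4 drive
    total = drives * per_drive                        # GB/s for the full card
    print(f"{gen}: {per_drive:.2f} GB/s per drive, {total:.2f} GB/s aggregate")

# PCIe 3.0: 3.94 GB/s per drive, 15.75 GB/s aggregate
# PCIe 4.0: 7.88 GB/s per drive, 31.51 GB/s aggregate
```

Under those assumptions, a gen4 x4 link alone carries about what a gen3 x8 does today, which is why an x4 or x8 DIMM.2 riser could plausibly keep up with current 4×4 cards once drives can use the headroom.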
p.s. Can we interest anyone in a Patreon toupee fund for Josh?
Once upon a time, God created the perfect head, the perfect shape, the perfect color, everything was perfect, and the head was Josh, and it was good.
And to all those with ugly, misshapen Non-Joshtekkishy scalps, God granted us hair, so that we may cover our shame.
So I gots a question for the show. I recently came into a nice bonus from work and got some beautiful upgrades to my rig, including a 3440×1440 FreeSync monitor and a spankin' new RX 580 to power it.
This is the most powerful card I’ve ever had, and it has a back-plate on it. I’ve never had a back-plate before, and while it’s pretty, I have no idea what it’s for or why it’s there.
So I ask you, PcPer, why is those things?