It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
On today's show:
00:30 – PCPer gear!
01:06 – External Thunderbolt 3 enclosure for something other than GPU?
02:55 – SoCs to replace desktop CPUs and GPUs?
05:35 – AMD EPYC server marketshare?
08:15 – Optane vs. NVMe with Threadripper CCX latency?
10:33 – Using USB-C laptop charger with smartphone?
12:39 – Spectre/Meltdown security concerns for everyday user?
14:44 – NVIDIA architecture names?
18:06 – Where are all the DirectX 12 PC games?
19:58 – 12TB Western Digital Reds?
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
Thumbnail caption:
I like big butts and I cannot lie
Any more updates on new FreeSync 2 or G-Sync monitors with proper HDR support? Since GPUs are still hard to come by at the right price, when can we expect better displays to purchase in the meantime? Or are the two linked, with manufacturers seeing GPUs going to miners instead of PC gamers and therefore not releasing better monitors?
The SoC doesn't need to do the heavy lifting once Windows on ARM and 5G are combined. We will all have supercomputers on our cell phones.
Could AMD put out a Vega GPU that requires desktop memory to work? For example, could they double the Vega 11 in the R5 2400G, keep the memory controller, and sell a Vega 22 that uses DDR4 as a desktop GPU?
Would it be too bottlenecked by memory? I'd hope the cost would be around $200 (GPU) plus roughly $100 for two 4GB sticks of high-speed DDR4 (possibly full desktop DIMMs).
I suppose it may only be worth it during these dark times of GPU drought.
Might be more of a Josh question.
The old Matrox G200 did something like that, I think.
https://en.wikipedia.org/wiki/Matrox_G200
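The bandwidth concern in the question above can be sanity-checked with some back-of-the-envelope arithmetic. The figures below are typical published specs (dual-channel DDR4-3200 and the RX Vega 56's HBM2 configuration), used here purely for illustration:

```python
# Rough memory bandwidth comparison for the DDR4-as-VRAM idea.
# Figures are typical published specs, assumed for illustration only.

# Dual-channel DDR4-3200: 2 channels x 8 bytes/transfer x 3200 MT/s
ddr4_gbs = 2 * 8 * 3200 / 1000      # 51.2 GB/s

# RX Vega 56's HBM2: 2048-bit bus at 1.6 Gb/s per pin
hbm2_gbs = 2048 * 1.6 / 8           # 409.6 GB/s

ratio = hbm2_gbs / ddr4_gbs         # HBM2 delivers ~8x the bandwidth
print(f"DDR4: {ddr4_gbs} GB/s, HBM2: {hbm2_gbs} GB/s, ratio: {ratio:.1f}x")
```

So even before cost enters the picture, a mid-size Vega fed only by dual-channel DDR4 would have roughly an eighth of the bandwidth the discrete cards were designed around, which is why the "too bottlenecked by memory" worry is well founded.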
A Vega GPU that uses DDR4 system memory still needs some dedicated physical VRAM, so that Vega's High Bandwidth Cache Controller (HBCC) can use a small amount of HBM2 (or other memory) as a last-level GPU cache, the High Bandwidth Cache (HBC), if you want more than 11 Vega nCUs to be supported. Vega's HBCC/HBC IP will be more directly testable on the discrete mobile Vega variants that ship with only 4GB of physical HBM2 VRAM. If you took the time to read the Vega whitepaper (PDF), you would know that using system DRAM as virtual VRAM is the very reason Raja's GPU team created the HBCC/HBC IP in the first place.
Also, it's useless to use DDR4 as physical VRAM, as that's no better than using DDR4 from system DIMMs. Maybe AMD could redesign for small amounts of GDDR5X/6, but HBM2 is really the way forward for APUs ever getting their own physical VRAM, since the power and thermal requirements of GDDR5/5X/6 prevent its use on SoCs/APUs.
So with Vega, AMD could very well create a lower-cost APU or GPU that comes with only 1GB or 2GB of HBM2/eDRAM used as High Bandwidth Cache, with the remainder of the VRAM in virtual form out on regular system DRAM, or even paged to an SSD or hard drive.
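As a loose software analogy only (this is not AMD's actual hardware mechanism), the HBCC scheme described above, a small fast pool backed by a much larger slow one, behaves like an LRU page cache. The toy sketch below illustrates the idea:

```python
from collections import OrderedDict

class ToyHBC:
    """Toy LRU page cache: a small fast pool (standing in for HBM2)
    backed by a large slow store (standing in for system DRAM).
    A loose software analogy for the HBCC idea, not real hardware."""

    def __init__(self, cache_pages):
        self.cache_pages = cache_pages   # capacity of the fast pool, in pages
        self.cache = OrderedDict()       # pages currently resident in the fast pool
        self.hits = 0
        self.misses = 0

    def access(self, page):
        if page in self.cache:
            self.cache.move_to_end(page)        # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1                    # "page in" from the slow backing store
            self.cache[page] = True
            if len(self.cache) > self.cache_pages:
                self.cache.popitem(last=False)  # evict the least recently used page

hbc = ToyHBC(cache_pages=2)
for p in [0, 1, 0, 2, 0, 1]:   # working set slightly larger than the fast pool
    hbc.access(p)
print(hbc.hits, hbc.misses)
```

Frequently touched pages stay in the fast pool while cold pages live in the slow store, which is why a GPU could get by with far less physical VRAM than the total working set, at the price of miss penalties when the working set overflows the cache.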
You should also know that even with HBM2 in short supply, it's not the cost of HBM2 that is making AMD's GPUs cost more; it's the demand for AMD's GPUs for compute (mining, AI, other compute needs) that is driving prices up.
The R5 2400G would need either HBM2 or eDRAM for the Vega HBCC/HBC IP to function and for the APU to support more than 11 Vega nCUs. And AMD has not chosen, or could not get certified, any APUs that use eDRAM or HBM2 this early on. But one would expect that AMD is working on an APU with HBM2, and it will probably be a professional/workstation-grade APU at first. AMD's problem currently is that it lacks the funds to incentivize consumer laptop OEMs to use its APUs, and to do so with better laptop feature offerings.
You must realize that consumer laptop margins are so thin that OEMs have become dependent on the CPU/SoC and OS suppliers for what little profit can actually be had from selling consumer laptops. Laptop OEMs are so dependent on the makers of CPUs/SoCs and proprietary OSes that they can barely afford to design their own consumer laptop SKUs, so in practice they have to go with whichever CPU/SoC and GPU makers are most willing and able to fund them indirectly. Because this business model has been allowed to exist for more than a decade, laptop OEMs are not fully in control of their own SoC/CPU and GPU choices.
So AMD, without the necessary funds to incentivize laptop OEMs, would still have trouble getting even APUs with their own HBM2 into consumer laptops. You can see that AMD's current mobile APUs are still not well represented across the laptop SKUs currently on the market. Consumer laptop OEMs have let themselves come under the control of these business practices over the history of the microprocessor market, and because of that history they are totally dependent on CPU/SoC and GPU makers incentivizing and underwriting the development of consumer laptops.
So, for lack of legal and governmental enforcement over the decades, the OEM consumer laptop market has come to depend financially on the suppliers of CPU/SoC and GPU parts, whose indirect funding gives consumer laptop OEMs a chance of making even the thin margins they do make. CPU/APU/SoC, GPU, and OS makers have too much control over the OEM market, a reversal of a normal market in which the OEMs would hold the leverage over their parts suppliers.
Professional/workstation-grade laptops (portable workstations) are less of a problem for the OEMs that produce them, since those end users can mostly afford the proper markups, leaving the OEMs less dependent on CPU/SoC and GPU suppliers for financial assistance in creating the pro SKUs.
The problem with an APU with HBM2 is the cost of the interposer. There is likely no point in making a product with a small amount of memory when the final product cost will necessarily be high. Go big or go home.
That's why HBM2 for an APU will first be done for workstation-class APUs, where the markups can be plenty high and the customer can write the business expense off on their taxes. So individual graphics professionals and large graphics businesses mostly, until economies of scale kick in more fully for HBM2 and interposers/packaging, at which point things will be affordable enough for consumer APU products with HBM2.
Interposers themselves do not cost that much, as they are made of the same material as any blank wafer. Most of the cost comes from packaging the dies on the interposer: applying the micro-bumps and controlling defects in the packaging process for interposer-based devices. Etching the interposer with traces can easily be done at 45nm or above, so that is not as costly as the micro-bump application and then attaching, securing, and testing the various dies and HBM2 die stacks.
Active silicon interposers are where things will get interesting for future devices. AMD could potentially etch the entire Infinity Fabric, both the control fabric and the data fabric, into the silicon interposer's substrate, including the IF's transistors and logic circuits. A modular GPU would then have all of its interconnecting data/control fabrics moved to the interposer, leaving more room on the processor dies attached to that active fabric for processor cores, shader cores, and other logic.
Do you think desktop/laptop CPUs will get integrated neural network inference ASICs like phone SoCs have? (Would this be an issue of devices not necessarily having integrated cameras? I'm not sure of the use case, but the phone use cases are gimmicky as it is.)
Microsoft announced hardware-accelerated machine learning in the next Windows 10 update… so I guess there might be a use case in there.