It's time for the PCPer Mailbag, our weekly show where Ryan and the team answer your questions about the tech industry, the latest and greatest GPUs, the process of running a tech review website, and more!
On today's show:
00:38 – 6-bit monitors with HDR?
03:43 – Threadripper 2800X?
06:03 – When to replace power supply?
08:07 – Secured Wi-Fi networks with shared password?
09:38 – Universal VR standard?
12:46 – Samsung 960 successor?
13:55 – Hybrid Memory Cube technology?
16:37 – GeForce Partner Program updates?
19:10 – Game-optimized GPU drivers?
Want to have your question answered on a future Mailbag? Leave a comment on this post or in the YouTube comments for the latest video. Check out new Mailbag videos each Friday!
Be sure to subscribe to our YouTube Channel to make sure you never miss our weekly reviews and podcasts, and please consider supporting PC Perspective via Patreon to help us keep videos like our weekly mailbag coming!
Question for next time: Can I cut and remove leads from my ATX power connector?
Backstory: I have a tiny case, and the ATX cable is the most difficult one to get to behave. When the ATX standard was created, CPUs and motherboards didn’t have the sophisticated VRMs on board that they have now, and thus relied on the PSU to provide 3.3 V, 5 V, etc. I’m wondering if this is still the case, or if just the 12 V lines and a few grounds will suffice.
This article says it all about HBM vs. HMC, and the author states:
“Strictly from a point of view of trying to understand the family relationships between these, I’ve sketched something of a family tree below based on my interwebs ferreting. Specifically, the HBM side of things is targeted largely at improving graphics processing, while HMC is intended more for servers in data centers, to oversimplify.” (1) [See article Graphic]
Read the full article, as there are different use cases for HBM2 and HMC. The author also does not go into the overall power usage of these respective memory-stacking technologies, so that may be something to look into as well, since JEDEC HBM2 has that 1024-bit-wide interface per HBM2 stack (subdivided into 8 independent 128-bit channels). Remember, the wider the parallel interface, the lower the clocks need to be to achieve the same effective bandwidth, so lower clocks translate into lower power usage and less need for error correction compared to higher clock rates on narrower interfaces.
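To put rough numbers on that width-versus-clock tradeoff, here is a quick back-of-the-envelope sketch. The per-pin data rates below are illustrative assumptions rather than datasheet figures, just to show that a wide/slow interface and a narrow/fast one can reach the same peak bandwidth:

```c
/* Illustration of the width-vs-clock point above.
 * Peak bandwidth ~= interface width (bits) x per-pin data rate (GT/s) / 8.
 * The data rates are representative assumptions, not datasheet values. */
#include <stdio.h>

static double bandwidth_gbs(double width_bits, double rate_gtps)
{
    return width_bits * rate_gtps / 8.0;   /* GB/s */
}

int main(void)
{
    /* Wide and slow, HBM2-style: 1024-bit stack interface at ~2 GT/s per pin */
    printf("wide/slow  : %.0f GB/s\n", bandwidth_gbs(1024.0, 2.0));

    /* Narrow and fast, serial-link style: 64 lanes at ~32 GT/s per lane */
    printf("narrow/fast: %.0f GB/s\n", bandwidth_gbs(64.0, 32.0));

    /* Same peak bandwidth, but the narrow link needs 16x the signaling rate. */
    return 0;
}
```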
HMC is still around and has its own use case and the author also states:
“Another difference reflects the backers and suppliers: HMC has pretty much only Micron as a supplier these days (apparently, HP and Microsoft were originally part of the deal, but they’ve backed out). HBM is a Hynix/AMD/Nvidia thing, with primary suppliers SK Hynix and Samsung.” (1)
(1) Bryon Moyer, “HBM vs. HMC: Comparing Cubes,” EE Journal, January 2, 2017.
https://www.eejournal.com/article/20170102-hbm-hmc/
What is the best portable computing device for about $300?
I have a good desktop for most of my computing needs, so I don’t need anything that powerful: just web surfing, email, and occasionally typing up a document on the go. Are Chromebooks the best option; is Google doing a better job of supporting them than Android devices? Are the inexpensive Windows PCs actually usable with 4GB of RAM? Is the new base iPad a reasonable choice?
My question on this subject is about very wide superscalar custom ARM cores and laptops. When will the custom ARM chip makers start to make inroads into the laptop market with custom ARM cores similar to Apple’s A11 Monsoon or the Samsung M3 Mongoose core (1), which is just as wide a superscalar design as Apple’s A11?
What about AMD’s Jim Keller-managed K12 project, the custom ARMv8-A design that appears to be on the back burner? If Microsoft’s Windows on ARM systems take off, what about AMD’s K12 and these custom ARM core designs from both Apple and Samsung that, on the inside, look more like a wide superscalar x86 core with all of its execution resources? Keller is gone, but the K12 IP remains for AMD to make use of.
That Samsung M3 (Mongoose) core design is very beefy compared to Apple’s, and I wish more were actually known about AMD’s custom K12 ARM cores, as Keller appeared to imply in those YouTube interviews that K12 may have SMT capabilities and be very similar in design to Zen on the inside (cache subsystems, FP, INT, and SMT/other resources).
If Apple begins to move over to using its A11 or newer cores for its lower-end MacBooks, what about AMD’s K12, maybe paired with Vega or newer graphics, and Samsung’s M3 Mongoose custom cores? Those are not to be confused with any ARM Holdings reference design cores, which are mostly used as the little cores next to Samsung’s and Qualcomm’s bigger and more powerful custom cores.
What about all that extra DSP/AI IP on Qualcomm’s and Apple’s custom SoCs, in addition to Apple’s custom GPU IP that is now going to be used instead of its PowerVR IP? Will the x86 makers have to start including more AI/DSP features on their mobile CPU/APU cores as well? And what will Intel’s and AMD’s response be if x86-based designs start to be pushed out of some laptops and mini-devices in favor of custom ARM designs?
AMD still has the K12 IP on hand to respond quicker than Intel if needed, but what if the custom ARM designs get a boost from Apple maybe going in-house for its MacBooks, and others start to take notice? Microsoft is certainly hedging its bets on custom ARM and Windows.
(1) “Mongoose 3 (M3) – Microarchitectures – Samsung,” WikiChip.
https://en.wikichip.org/wiki/samsung/microarchitectures/mongoose_3
Question for Allyn.
How do we measure queue depth for a single thread? Isn’t disk I/O blocking, i.e. a requesting thread must wait until it has received an acknowledgement? I get confused when I hear QD16-T1; I don’t understand how it’s possible. The way I understand it, a thread can only make one I/O request at a given time, wait for the ack, and then make another. What am I missing?
Thanks in Adv.
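For anyone puzzling over the same thing: QD16 on a single thread is possible because the thread submits requests asynchronously instead of blocking on each one. Below is a minimal sketch using POSIX asynchronous I/O (aio_read returns as soon as the request is queued); the file path, block size, and polling loop are illustrative assumptions, not how any particular benchmark is implemented:

```c
/* One thread, sixteen reads in flight: the submit loop never waits on the
 * disk, so the drive sees a queue depth of 16 from a single thread. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define QUEUE_DEPTH 16
#define BLOCK_SIZE  4096

int main(void)
{
    int fd = open("/tmp/testfile", O_RDONLY);   /* placeholder test file */
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cbs[QUEUE_DEPTH];
    static char bufs[QUEUE_DEPTH][BLOCK_SIZE];

    /* Submit 16 reads back to back; none of these calls wait on the disk. */
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        memset(&cbs[i], 0, sizeof cbs[i]);
        cbs[i].aio_fildes = fd;
        cbs[i].aio_buf    = bufs[i];
        cbs[i].aio_nbytes = BLOCK_SIZE;
        cbs[i].aio_offset = (off_t)i * BLOCK_SIZE;
        if (aio_read(&cbs[i]) != 0) { perror("aio_read"); return 1; }
    }
    /* Right here, one thread has 16 requests outstanding. */

    /* The acknowledgements arrive later; reap each completion in turn. */
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        while (aio_error(&cbs[i]) == EINPROGRESS)
            usleep(100);               /* real code would use aio_suspend() */
        printf("request %2d completed: %zd bytes\n", i, aio_return(&cbs[i]));
    }

    close(fd);
    return 0;
}
```

(Compile with -lrt on Linux. Benchmarking tools achieve the same effect with Windows overlapped I/O or Linux libaio/io_uring.)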