UPDATE: If you missed the live stream yesterday, I have good news: the interview and all the information/demos provided are available to you on demand right here. Enjoy!
Last week NVIDIA launched GM204, otherwise known as Maxwell and now branded as the GeForce GTX 980 and GTX 970 graphics cards. You should, of course, have already read the PC Perspective review of these two GPUs, but undoubtedly there are going to be questions and thoughts circulating through the industry.
To help the community get a better grasp and to offer them an opportunity to ask some questions, NVIDIA's Tom Petersen is stopping by our offices on Thursday afternoon where he will run through some demonstrations and take questions from the live streaming audience.
Be sure to stop back at PC Perspective on Thursday, September 25th at 4pm ET / 1pm PT to discuss the new Maxwell GPU, the GTX 980 and GTX 970, and new features like Dynamic Super Resolution, MFAA, VXGI and more! You'll find it all on our PC Perspective Live! page on Thursday, but you can sign up for our "live stream mailing list" as well to get notified in advance!
NVIDIA Maxwell Live Stream
1pm PT / 4pm ET – September 25th
PC Perspective Live! Page
We also want your questions! The easiest way to get them answered is to leave them for us here in the comments of this post. That will give us time to filter through the questions and get the answers you need from Tom. We'll take questions via the live chat and via Twitter (follow me @ryanshrout) during the event, but oftentimes there is a lot of noise to deal with.
So be sure to join us on Thursday afternoon!
UPDATE: We have confirmed at least a handful of prizes for those of you that tune into the live stream today. We'll give away an NVIDIA SHIELD as well as several of the brand new SLI LED bridges that were announced for sale this week!
I would like to know why every GeForce driver works fine for some users while others have had problems with almost every driver since GeForce Experience was introduced.
It’s because everyone has different hardware, software, etc. on their computer. It’s impossible to foresee every little bug that will come up. I bet if you check AMD’s drivers you’ll see the same thing.
I’ve got a good question: are any cards going to be given away?
1) I am disappointed to find out that the GTX 980 & 970 do not support DisplayPort 1.3. Why was the decision made to go with DP 1.2?
2) With the GTX 980 & 970 using the DP 1.2 standard, what is the maximum frame rate for the cards?
3) When will NVIDIA produce a GTX card that supports DP 1.3? And will that card be able to support frame rates higher than 60?
4) For UHD, how much memory will be required per graphics card?
5) I am not finding information for UHD that mentions 10- or 12-bit graphics. Will NVIDIA cards support 10- or 12-bit requirements?
6) What is needed to drive a UHD display for gaming or business apps when splitting the display into two or four areas? Will you need more than one graphics card to handle this?
7) What display size is good when a UHD display will be split into two, four, or more viewing areas?
Hi, I have a quick question for Tom, relating to G-Sync, for when he comes on the stream. I missed his live stream that was dedicated to G-Sync a couple of weeks back.
My question is: “Does NVIDIA plan on improving the firmware/feature set of G-Sync over time, and if so, will it be possible for consumers to update the firmware/software over DP?”
Thanks!
Recent generations of CPUs, GPUs, SoCs, etc. seem to be able to significantly reduce average power usage by aggressively dropping to low-power states whenever possible, sometimes referred to as a “race to sleep”. To be more responsive, contemporary devices are also able to make these decisions hundreds of times a second. A side effect is that power use now has a greater dynamic range and can depend on the nature of the workload.
Does Maxwell employ more granular clock and power domains than Kepler, so that it can power down idle portions of the GPU?
Does the control logic have a greater capability to scale voltage and clock speeds in response to load, etc.? (more accurately, more often, to a greater degree)
Also, it looks like the reference designs may set a conservative power target while some AIB versions raise the target ceiling and let Maxwell run. Tom’s Hardware has some interesting insights into the power usage of Maxwell, specifically when it comes to GPGPU workloads. According to their GPGPU tests, it looks like a non-reference Maxwell card can consume an extra 60-100W (that’s 33-55% more) compared to a reference card. During gaming, the same two cards are within about 10W of each other, so this doesn’t appear to be due to differences in the efficiency of power delivery or “golden” samples.
I have no desire to mine digital currency or virtually fold proteins all day, but has NVIDIA seen this sort of variance?
Do you consider this a “power virus” scenario?
Does this mean Maxwell technology will need to be carefully tuned if it is to be used in a successor to Tesla?
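(For anyone who wants to watch this dynamic range on their own card, a rough sketch is below. It polls NVML for power draw and graphics clock using the pynvml Python bindings; the 100 ms sampling interval and 10-second duration are arbitrary choices for illustration, not anything NVIDIA specifies.)

import time
import pynvml

# Sample power draw and graphics clock on the first GPU to see how quickly
# the card ramps between low-power idle states and full boost under load.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
try:
    for _ in range(100):  # roughly 10 seconds at 100 ms per sample
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        print("power: {:6.1f} W   graphics clock: {:4d} MHz".format(power_w, clock_mhz))
        time.sleep(0.1)
finally:
    pynvml.nvmlShutdown()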
Thanks Ryan and crew for taking the time to ask Tom one of my questions. I appreciate the substantive response as well. The key part of Tom’s response for me was the insight that, while gaming on Maxwell, performance is going to be limited more by clock speeds than by running out of power.
I think this was actually demonstrated by the quick OC session with the GTX 980. The OSD showed that Maxwell was still below 100% power even after a 225 MHz bump. Upping the clocks even more while also adding voltage still only looked to hit about 105% power. Crysis 3 is just one game, but the demo was pretty clear that cards with big beefy coolers should easily be able to keep temps in check and allow the GPU to boost freely.
Why is the minimum refresh rate of G-Sync set to 30 Hz and not 24 Hz? It would provide a better gaming experience across a wider framerate range and would make G-Sync compatible with games like L.A. Noire and NFS Rivals that have their framerate locked at 30 fps.
This was answered in the previous Q&A. The G-Sync scaler will match the capabilities of the panel, so if the panel can go as low as 6 Hz, the scaler will match that. Every G-Sync monitor goes through NVIDIA’s QA process to make sure they work as advertised.
It was answered, but that wasn’t the answer. Below 30 Hz, G-Sync is turned off and the card switches to V-Sync on.
Will DSR be made available to SLI users (as it currently only works with a single GPU)? Thanks to Tom, Ryan and the rest of the pcper team 🙂
More questions for Tom:
1) Is the Tegra chip in Audi’s cars different from the one inside the SHIELD tablet? My limited knowledge of RTOS solutions informs me that usually the hardware for these sorts of jobs is custom-made for the purpose it’s designed for.
2) Is there headroom left in your colour compression technology for improvements in the future? (Simple yes or no)
3) Is your colour compression different from AMD’s? If so, is it superior, or are there many similarities?
4) Why is SLI disabled when attempting to run four cards on PCI-E 3.0 at 8x/8x/8x/4x? I know it’s bandwidth-related, but I haven’t seen a technical explanation as to why this is so, particularly when quad Crossfire runs the fourth card at 4x without complaining.
5) rAge Expo is happening next month in South Africa, and a few vendors are showing off 120/144 Hz monitors; there’s a 4K stand somewhere there, but nothing from NVIDIA or any local partners on G-Sync. Can you help remedy this?
6) Are there any specific markets globally that have shown a greater interest in G-Sync than others?
1) Will NVIDIA consider making a version of their LED SLI bridge in a 2-way, single-spaced configuration? The slot layout on quite a few motherboards only allows a single empty slot between the cards when in SLI.
2) Could the reference 970/980 run passively at idle? They idle at 10-15W, so those big heatsinks should be able to handle the heat easily, considering that 65W TDP cards like the 630/640 are available in fanless models.
Does the GTX 980 finally allow true 30-bit color at 4:4:4 over HDMI 2.0 at 1080p and 4K UHD resolutions, so it is now the same as 10-bit-per-channel color over DisplayPort on the Quadro GPUs, at over one billion colors?
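(For reference, the “over one billion colors” figure is just the arithmetic of 10 bits per channel across three channels: 2^(10×3) = 2^30 = 1,073,741,824 distinct colors, versus 2^24 ≈ 16.7 million for standard 24-bit output.)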
What memory configuration would you recommend for surround 1440p and 4K gaming? If 4GB is the sweet spot for one 4K monitor, does that mean 12GB is required for optimal performance in surround?
I’ve heard a lot about upcoming games that will support Mantle, but what GameWorks-optimized games can we expect in the near future? Thanks
I really like the updated industrial design and the performance improvements brought with this generation of GPUs, and I can’t wait to see the 980 Ti and Titan II, hopefully with a 512-bit memory bus and DP 1.3. I also really like the new display output layout.
Is HDMI 2.0 backwards compatible with existing cables and devices?
Are there any more G-Sync monitors we can expect in the next month? Thanks
Are we nearing the end of the lifetime of PCIe Gen 3.0 lanes on motherboards, or do we still have plenty of bandwidth capacity left for the next 5 years of GPUs?
How much overclocking potential do we lose with the reference design 980? Are there any performance improvements with this iteration of the reference cooler? Thanks guys!
Could you let me know if the 3DTV Play software supports the JVC DLA-X7-B projector with its separate 3D emitter, as I can’t find a compatibility list on the website?
My question:
With global illumination, does it consider how reflective each surface is? If so, how?
Good job on lowering the total number of cores and the memory bus width compared to the 780 Ti while still getting better and more power-efficient performance.
What should we expect from the full GM chip?