While the headline of the GeForce 378.66 graphics driver release is support for For Honor, Halo Wars 2, and Sniper Elite 4, NVIDIA has snuck something major into the 378 branch: OpenCL 2.0 is now available for evaluation. (I double-checked the 378.49 release notes and confirmed that this is new to 378.66.)
OpenCL 2.0 support is not complete yet, but at least NVIDIA is now clearly intending to roll it out to end users. Among other benefits, OpenCL 2.0 allows kernels (think shaders) to enqueue work onto the GPU without the host intervening. This saves one or more round-trips to the CPU, which matters especially in workloads where you don’t know which kernel will be required until you see the results of the previous run, like recursive sorting algorithms.
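For readers unfamiliar with the feature, here is a minimal sketch of what device-side enqueue looks like in OpenCL C 2.0. The `parent` and `child` kernel names are illustrative (not from any NVIDIA sample), and this is device code, so it needs an OpenCL 2.0 driver and a host program to actually run:

```c
/* OpenCL C 2.0 device-side enqueue: a kernel launches more GPU work
 * itself, instead of returning to the host to ask for the next dispatch. */
kernel void child(global int *data)
{
    data[get_global_id(0)] *= 2;  /* stand-in for "the next pass" */
}

kernel void parent(global int *data, int n)
{
    /* One work-item inspects the results and decides what to run next,
     * enqueuing it on the device's default queue with no CPU round-trip. */
    if (get_global_id(0) == 0) {
        enqueue_kernel(get_default_queue(),
                       CLK_ENQUEUE_FLAGS_WAIT_KERNEL,
                       ndrange_1D(n),
                       ^{ child(data); });
    }
}
```

In OpenCL 1.x, the host would have to read back the intermediate result, choose the next kernel, and dispatch it; here that decision stays on the GPU.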
So yeah, that’s good, although you usually see big changes at the start of version branches.
Another major addition is Video SDK 8.0. This version allows 10- and 12-bit decoding of VP9 and HEVC video. So… yeah. Applications that want to accelerate video encoding or decoding can now hook up to NVIDIA GPUs for more codecs and features.
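As a sketch of what that hook-up looks like: Video SDK 8.0 exposes a capability query, `cuvidGetDecoderCaps()`, which lets an application ask the driver whether a given codec and bit depth can be decoded in hardware before it creates a decoder. The snippet below assumes the CUDA toolkit plus the SDK’s `nvcuvid` headers and library are installed, needs a capable NVIDIA GPU to report “yes”, and omits error checking for brevity:

```c
/* Query 10-bit VP9 hardware decode support via the NVDECODE API. */
#include <stdio.h>
#include <cuda.h>
#include <nvcuvid.h>

int main(void)
{
    CUdevice dev;
    CUcontext ctx;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    CUVIDDECODECAPS caps = {0};
    caps.eCodecType      = cudaVideoCodec_VP9;
    caps.eChromaFormat   = cudaVideoChromaFormat_420;
    caps.nBitDepthMinus8 = 2;   /* 10-bit */

    cuvidGetDecoderCaps(&caps);
    printf("10-bit VP9 decode supported: %s\n",
           caps.bIsSupported ? "yes" : "no");

    cuCtxDestroy(ctx);
    return 0;
}
```

Changing `nBitDepthMinus8` to 4 queries 12-bit support, and swapping in `cudaVideoCodec_HEVC` does the same for HEVC.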
NVIDIA’s GeForce 378.66 drivers are available now.
VP9 10/12-bit decode support is limited to select Pascal chips.
O rly??? Applications can now hook up to which nVidia GPUs for VP9, Scott?
There's a table down the page in the "is Video SDK 8.0" link. (I'll repeat the link here with an anchor to the appropriate table.)
I will break Wikipedia’s back for this by withholding my $25 donation. The GTX 950, GTX 960, GTX 1050, GTX 1050 Ti, and GTX 1060 support VP9. Anyone with a GTX 1070 or 1080 can run “DXVA Checker”, but I don’t think those two cards have VP9. The entry for VP9 support is “VP9_VLD_Profile0”, with an additional entry, “VP9_VLD_10bit_Profile2”, for 10-bit support.
You’re a moron and it really shows: GP104 does support VP9_VLD_Profile0, and it’s right there.
+1 well said.
I wonder, why can’t they make a decoder that uses CUDA to accelerate formats that are not natively supported? For example, in the past there was a CUDA-accelerated encoder (Badaboom Media Encoder); couldn’t they do something like that for decoding additional formats?
Maybe they tried that and the performance wasn’t good enough.
It’s most probably just not supported at the hardware level, and the software workaround they probably tried wasn’t good enough.
What do you think NVIDIA’s CUVID API is for?
I guess Nvidia decided that CUDA is not enough to guarantee their market share in the professional market, or maybe OpenCL 2.0 is catching up with CUDA? Deciding to support OpenCL 2.0 must have been a difficult decision for them. Nvidia fans can only hope that Nvidia is changing its attitude and that support for the Adaptive-Sync standard will follow in a future driver (yes, I am jumping to probably wrong conclusions here; I just hope they will decide to change their business model now that it is obvious that they can’t lock the market to their own standards. AMD survived).
Nvidia doesn’t give a shit about AMD, because AMD officially said that they no longer compete with Nvidia. Nvidia and Intel want AMD to survive on the market because they need them to keep making money behind them. All this is marketing… nothing more.
What are you talking about? AMD doesn’t compete with Nvidia? Link? Intel might want AMD to survive; Nvidia, not so much. And guess what: Nvidia will make more money if they end up as a monopoly. They will have the future consoles, and the PCs.
Nvidia doesn’t want the console market. Microsoft and Sony wanted to work with Nvidia, but Nvidia had other plans for gaming at the time and refused those console deals. A monopoly means they are too good and have no competition… you haven’t realized that yet?!? LOL
That is the proof that AMD Radeon no longer competes with Nvidia. Google it.
Nintendo Switch says, Hi!
Well, yeah, they don’t want consoles. They just leave consoles to AMD, so that GCN can become the de facto standard architecture to optimize for when developing games.
I really liked that LOL of yours. It is proof that ignorance and happiness go together.
Nvidia will eventually have to support OpenCL 2.0. It has nothing to do with CUDA being threatened by OpenCL and such; they just “delayed” it because it is not their main priority. It is the same as when AMD did not support the latest OpenGL spec as quickly as they could. I still remember it took almost a year for AMD to actually support the OpenGL 4.4 spec.
Why support OpenGL when you are pushing a different API? Why support OpenGL when you know that the competition is much better at it?
Why support OpenCL 2.0 when you are pushing CUDA? Why make your cards better at something that the competition needs desperately? By making Nvidia cards better under OpenCL, you are giving programmers an extra excuse to go in that direction instead of concentrating on CUDA.
Even if Nvidia strictly pushed for OpenCL, it doesn’t mean that would help AMD. I mean, OpenGL is also open like OpenCL, but why does Nvidia dominate AMD in performance? What happened with OpenGL could also happen with OpenCL. When it comes to OpenGL and AMD, it’s more that AMD does not want to put expensive effort into fixing a mess that started with ATI.
The fact that OpenGL and OpenCL have similar names doesn’t mean that they are similar, or that AMD’s cards will have the same bad performance in OpenCL as they have in OpenGL. Totally different APIs for totally different jobs.
If you still insist, just look at DirectX 11 and DirectX 12. Even more similar names, and what’s more, both APIs do the same job; yet under DX12, things look a little different than under DX11 when testing Nvidia and AMD cards. Do they not?
In the case of DX11 vs DX12, it is not about good or bad performance. In Nvidia’s case, their DX11 implementation is already so good that there is almost nothing left to tap with DX12. In fact, when going to DX12, developers actually have to compete with Nvidia’s optimizations instead.
For AMD, going to DX12 solves some of their crucial problems with DX11: 1) much lower CPU overhead, and 2) they need async compute to access their ACE hardware and address the utilization problem inherent in their architecture design. When you see an 8.6 TFLOPS GPU that cannot significantly beat a 5.6 TFLOPS GPU, there is something wrong with it.
The next thing you will tell me is that a software solution can be faster than a hardware one.
Nvidia will only have a monopoly for discrete cards. Intel will still be the market share leader for all graphics. Unless they do an AMD (Ryzen) and jettison the iGPU to add more cores.
If AMD does go down, Intel will probably step up and make their own discrete card.
That’s not going to happen. Intel needs AMD to watch their back on antitrust matters. But if AMD really did go the VIA route, Intel might as well take RTG off AMD’s hands. That’s the story Kyle from [H] has been cooking up since last year.