If you are using a 1080p monitor, or perhaps even outputting to a large 1080p TV, there is no point in picking up a $500+ GPU, as you will not be using the majority of its capabilities. Phoronix has just looked into which GPU offers the best value for gaming at that resolution, putting five AMD GPUs from the Radeon R9 270X to the R9 Fury and six NVIDIA cards ranging from the GTX 950 to a GTX TITAN X onto their test bench. The TITAN X is a bit of overkill, unless somehow your display is capable of 200+ fps. When you look at frames per second per dollar, the GTX 950 came out on top, providing playable frame rates at a very low cost. These results may change as AMD's Linux driver improves, but for now NVIDIA is the way to go for those who game on Linux.
"Earlier this week I posted a graphics card comparison using the open-source drivers and looking at the best value and power efficiency. In today's article is a larger range of AMD Radeon and NVIDIA GeForce graphics cards being tested under a variety of modern Linux OpenGL games/demos while using the proprietary AMD/NVIDIA Linux graphics drivers to see how not only the raw performance compares but also the performance-per-Watt, overall power consumption, and performance-per-dollar metrics."
Here are some more Graphics Card articles from around the web:
- AMD R9 Nano @ Kitguru
- AMD Radeon R9 Nano @ Hardwareheaven
- AMD Radeon R9 Nano @ Legion Hardware
- The AMD R9 Nano Performance Review @ Hardware Canucks
- Asus R9 390X STRIX DC3 OC 8GB @ Kitguru
- Asus ROG Poseidon Platinum GTX 980 Ti Review @ Bjorn3d
They will have to do that
They will have to do that test again when Vulkan is released and more asynchronous compute ability is utilized in games/gaming engines, including benchmarks that actually add up the latency produced between CPU and GPU for GPUs that cannot run more of the asynchronous compute tasks fully on the GPU's hardware. In the future, if the code can be run on the GPU independently of the CPU's help or interaction, then there will be fewer overall latency penalties for games. Valve will be seriously working with Khronos to get Vulkan included as soon as possible, and then it will be time to do the benchmarking again.
Even with Vulkan I would not
Even with Vulkan I would not be surprised if Nvidia still comes out on top. Also, the existing games on Linux are most likely not going to be ported to Vulkan. Nvidia has already mentioned a day-1 driver release for Vulkan (continuing their current OpenGL commitment, where they release drivers for the latest OpenGL spec by the time the spec is announced by the Khronos Group).
For YEARS, Nvidia has been the choice if you really want to game on Linux.
But Nvidia does not have the
But Nvidia does not have the hardware-based asynchronous compute resources in its GPU hardware to support all of the Vulkan (just Mantle with the Vulkan name) API's ability to make use of asynchronous compute, and the same goes for DX12, so what is Nvidia going to do when those Vulkan benchmarks say otherwise? OpenGL is not going to be used for new Linux gaming software, as game developers will have Vulkan and will use it for its ability to do asynchronous compute on the GPU, just as game makers will be switching to DX12 for its Mantle-based design that takes advantage of hardware asynchronous compute resources. OpenGL will be for legacy support and older games, but the games/game engine makers will want that extra hardware asynchronous compute ability on AMD's GCN ACE units, with game graphics and other game functionality running/accelerated on the GPU's hardware. The VR gaming folks are saying hardware asynchronous compute ability is what is needed for VR gaming.
Nvidia's GPUs will have to rely more on the CPU and on slower software to mimic the faster in-hardware asynchronous compute resources that AMD's GCN ACE units provide! Nvidia will have to rely on latency-adding interaction with the motherboard CPU to schedule its simulated, more pseudo, in-software (not hardware) asynchronous compute, while AMD's GCN ACE units do that in the GPU's hardware, without the need for any latency-inducing CPU interaction for the thread scheduling and other context switching that AMD's ACE units can handle on their own with AMD's GPU hardware asynchronous compute engines. Vulkan is mostly Mantle, DX12 is mostly Mantle, and Nvidia will have a much greater reliance on the CPU; every time the CPU needs to send over work, that work has to be encoded/encapsulated into the PCIe protocol (a latency-inducing step), sent to the GPU, and decoded/de-encapsulated (more latency) for the GPU to work on, while AMD's GPUs will be able to do compute on their ACE units independently of any CPU interaction.
Those ACE units on AMD's GPUs have, in GPU hardware, more of the logical abilities normally associated with CPUs for managing independent GPU processing threads and context switching them. Should one GPU processing thread need to wait for a dependency before continuing, the hardware can halt that waiting thread immediately and switch in another thread to work on, keeping AMD's ACE units running as close to maximum efficiency as possible. Nvidia's GPUs have to wait for draw calls to complete before trying to manage threads in a more simulated, in-software, not fully hardware-based form of GPU thread management. Much slower and less responsive, with more GPU resources idle due to the LACK of efficient hardware thread management in Nvidia's GPU hardware. Nvidia does not have hardware asynchronous compute ability in its GPUs.
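As a concrete illustration of what asynchronous compute looks like from the application side, here is a hedged C++ sketch against the Vulkan C API as it eventually shipped: it looks for a queue family that exposes compute but not graphics, which is the hook an engine uses to submit compute work on a separate queue alongside rendering. Whether such a family exists, and whether it maps to dedicated scheduling hardware such as AMD's ACE units, is entirely up to the GPU and driver; the API itself makes no such promise.

    // Sketch: find a compute-only queue family on a Vulkan physical device.
    // Returns the family index, or -1 if none exists (compute then shares the
    // graphics queue and there is no separate queue for the engine to exploit).
    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <vector>

    int findDedicatedComputeQueueFamily(VkPhysicalDevice gpu) {
        uint32_t count = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
        std::vector<VkQueueFamilyProperties> families(count);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

        for (uint32_t i = 0; i < count; ++i) {
            const VkQueueFlags flags = families[i].queueFlags;
            if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT)) {
                return static_cast<int>(i);  // dedicated compute family found
            }
        }
        return -1;
    }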
Async compute bla bla bla.
Async compute bla bla bla. Let me ask you this: just because DX12 exists, are all game developers going to abandon DX11 entirely? Async compute is not even a mandatory feature of DX12. So with the arrival of Vulkan, do you think devs are going to ditch OpenGL entirely? Go look at Android for example. Hardware for OpenGL ES 3.0 has been on the market for quite some time, and Android has supported OpenGL ES 3.0 since 4.3. Now how many games actually use OpenGL ES 3.0?
And if you think devs are going to jump onto Vulkan and say bye bye to OpenGL for Linux gaming, then you seriously know nothing about what happens in the Linux gaming world. Let me give you a hint: devs for sure have no problem supporting Vulkan on Linux if they want to; the problem is somewhere else.
Imagination Technologies the
Imagination Technologies, the designer/licenser of the PowerVR GPU, is already doing Vulkan demonstrations on its GPU hardware, and Imagination Technologies is all in with GPU compute on its GPUs. So expect that OpenGL ES will be available for the legacy gaming stuff, but the games developers will be all in with Vulkan and GPU asynchronous compute. AMD, ARM (Mali), Imagination Technologies (PowerVR), Qualcomm and others are all in with asynchronous compute on their GPUs! There are more cores for compute and graphics on the GPUs used for number crunching and gaming than there are on any CPU! Expect that all newer games will use Vulkan! Nvidia's GPU compute gimping days are over! Linux gaming, and a big Linus one-finger bird salute to the M$ and Nvidia monopolies!!!!!
So do you have actual proof
So do you have actual proof that shows PowerVR GPUs or the likes of Snapdragon have ACEs in their hardware? As some console devs have said, ACEs are pretty much hardware that only exists in AMD GCN right now. I don't know why you guys assume that since a PowerVR GPU is capable of running Vulkan it must have something similar to AMD GCN ACEs. I'll give you the reminder: async compute is not a mandatory part of DX12 or Vulkan. Yes, Vulkan is somewhat of a carbon copy of Mantle, but Khronos specifically mentioned they only took the parts that can be applied to all GPU vendors and stripped out anything specific to AMD hardware only.
Also, as I said, the problem is not the hardware. By the time the problem gets sorted out (if it ever will), Nvidia hardware AND software will most likely have a solution for async compute ready in place. Linus can give all his middle fingers to Nvidia all day, but he is not the one giving working drivers to Linux gamers.
The PowerVR GPU has its own
The PowerVR GPU has its own version of asynchronous compute, and the latest PowerVR GPU options even have virtualization hardware and dedicated ray tracing hardware. Imagination Technologies (PowerVR's maker) is a founding member of the HSA Foundation along with AMD, ARM Holdings (Mali), Qualcomm and others, and Qualcomm is even, through its API, allowing its on-board SOC DSP to be used by applications. HSA is about using all the processing devices on a SOC/APU/discrete GPU for compute/graphics/other workloads. And even Nvidia has some Vulkan demos out there; it's just that Nvidia's hardware support for asynchronous compute is lacking and not as efficient as true hardware-based asynchronous compute on the GPU. Nvidia has the engineering resources to improve, but Nvidia is more interested in segmenting its compute off to its higher-priced pro SKUs, while AMD is bringing true hardware asynchronous compute to the consumer, and AMD's APUs and GPUs will be able to offload more compute-oriented workloads onto the GPU's hardware in addition to graphics workloads. All of the HSA Foundation's membership is working towards implementing the HSA 1.0 specification/standard. Go and look at the HSA Foundation's membership roster and see the industry CPU/SOC/GPU makers that are in with the HSA standards. Why would anyone in their right mind not want to utilize all those massive ranks of FP/Int/other units on GPUs to accelerate any compute workloads, and not just graphics workloads? Games use more than just graphics workloads, and who would not want more of the game engine code running on the GPU, on those massive amounts of raw computing resources, where any latency issues could be made as low as possible?
Valve will make damn sure that when Vulkan is released to the market, Steam OS and the Steam game library will make use of Vulkan, and it's not just Valve; it's the games companies and the Linux community. These benchmarks are being run on what will soon become legacy graphics APIs, and not the newest ones that will be here by the end of 2015. So naturally Khronos will support its older graphics software/API stack for some years to come, but the device manufacturers are chomping at the bit to get at all the GPUs'/other processing units' processing power, in addition to the comparatively limited processing power of the CPUs in those devices' SOCs/APUs.
You enjoy your worship of Nvidia's monopoly market share, but all it will net you is reduced true GPU hardware-based computing ability in the future, as AMD and the PC, laptop and mobile markets move more in the direction of the HSA Foundation's goals of making use of GPUs/other processors for any and all types of compute workloads.
Radeon R9 290X say hello.
Radeon R9 290X says hello. Its DP was limited to 1/8 of FP32 performance, while the FirePro version of Hawaii was rated at 1/2 of FP32. So did you think only Nvidia gimps their consumer products so they can sell the professional lineup at a much higher price?
You keep saying async this, HSA that, compute this and API that, while failing to look at the core problem. This has nothing to do with an Nvidia monopoly at all. That's just how reality is. You know what? They said give it TWO years, TWO years until OpenCL would annihilate CUDA completely. That was back in 2009. Now fast forward to 2015. Has OpenCL killed CUDA yet?
And if async compute really is that important, Nvidia will adopt it. They still have some time before Pascal comes out. Fermi initially didn't have a tessellation engine either; it was added at a later stage. That still didn't stop Nvidia from beating AMD's implementation on their first try.
Nvidia will be forced to
Nvidia will be forced to adopt more hardware asynchronous compute by the market forces that will cost Nvidia business if they do not stop that gimping, and those AMD R9 290Xs were used for Bitcoin mining and still have more FP relative to Nvidia's gimped consumer SKUs. AMD has not been reducing FP like Nvidia has on its consumer lines. Nvidia is just a product-segmenting, market-milking monopoly. It's just more Nvidia classic monopoly lock-in for its users, lock, stock and barrel, to Nvidia's gimped ecosystem, where less compute costs much more. It's more BOHICA from Nvidia to its users!
seriously FP and bit coin?
Seriously, FP and Bitcoin? Coin mining has nothing to do with FP performance at all; it is just that the mining stuff favors how the AMD architecture works. Nvidia fixed that with Maxwell, and Maxwell has much, much more gimped FP (FP64) compared to Kepler.
“AMD has not been reducing FP like Nvidia has on its consumer lines”
They did, starting with Hawaii.
http://www.anandtech.com/show/7927/amd-launches-firepro-w9100
http://www.anandtech.com/show/7457/the-radeon-r9-290x-review/18
“Meanwhile double precision performance also regresses, though here we have a good idea why. With DP performance on 290X being 1/8 FP32 as opposed to ¼ on 280X, this is a benchmark 290X can’t win.”
That is fine proof for you that AMD also gimps their consumer cards to sell the professional lineup. It doesn't matter that it is much less than Nvidia; the main point is they also did it.
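For context, the ratios being argued about reduce to simple arithmetic. Here is a small C++ sketch using the publicly listed shader counts and roughly 1 GHz clocks for the 280X (Tahiti) and 290X/W9100 (Hawaii); treat the outputs as back-of-the-envelope theoretical peaks, not measured numbers (the W9100 actually clocks a little below 1 GHz).

    // Rough theoretical peaks from shader count, clock, and DP:SP ratio.
    #include <cstdio>

    // SP peak in GFLOPS: shaders * 2 FLOP/clock (fused multiply-add) * clock in GHz.
    double peakSpGflops(int shaders, double clockGhz) {
        return shaders * 2.0 * clockGhz;
    }

    int main() {
        const double sp280x = peakSpGflops(2048, 1.0);  // R9 280X (Tahiti), ~1.0 GHz
        const double sp290x = peakSpGflops(2816, 1.0);  // R9 290X (Hawaii), ~1.0 GHz

        std::printf("280X : %4.0f SP GFLOPS, %4.0f DP GFLOPS (1/4 rate)\n", sp280x, sp280x / 4.0);
        std::printf("290X : %4.0f SP GFLOPS, %4.0f DP GFLOPS (1/8 rate)\n", sp290x, sp290x / 8.0);
        std::printf("W9100: same Hawaii chip, ~%4.0f DP GFLOPS at a 1/2 rate\n", sp290x / 2.0);
        return 0;
    }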
Both companies make Pro SKUs,
Both companies make Pro SKUs, but compare Nvidia's FP gimping on its consumer SKUs: AMD's consumer GPU SKUs have more FP (SP/DP) and hardware ACE resources. Nvidia better get off its Green Goblin A$$ and get more hardware asynchronous compute ability.
This power-usage metric for power savings has gone far enough if it only results in the removal of more FP/other resources from GPU cores and the gimping of true hardware-based asynchronous compute ability. Let both AMD and Nvidia get their power savings from fab process node shrinks and from power gating the GPU's core units when they are not in use or needed, but more hardware asynchronous compute ability needs to be added to GPUs in the future to get more of the game code running on the GPU's massive numbers of cores/ACE units, because CPUs are not there when it comes to raw computing power for gaming code/gaming graphics like GPUs are. Consumer GPUs for VR gaming are going to need all that hardware asynchronous compute ability, running the games/graphics code on the GPU and doing away with all those latency-adding trips down to the CPU on the motherboard.
P.S. Future interposer-based AMD gaming APUs, whether as a PCI-card-based discrete solution or a motherboard-based solution, will have the ability to link the CPU with the GPU via the interposer such that none of the current CPU-via-PCIe-to-discrete-GPU solutions will be able to compete. The interposer will allow for tens of thousands of parallel traces between CPU cores, GPUs, HBM, and other chips on the interposer. The future powerful gaming APUs will be derived from those AMD HPC/workstation SKUs, so expect that maybe those discrete graphics cards from AMD will include a few CPU cores to go along with all the GPU's ACE units and make latency even less of an issue for future gaming systems. Those VR-based games are going to need the lowest latencies possible, and the same goes for 4K and 8K gaming!
They will when the market
They will when the market says they need them. The market asked for power efficiency, and Nvidia delivered it with Kepler. You know what they said back then? That all the compute stuff Nvidia included in their GPUs was USELESS for gaming, and they accused Nvidia of forcing gamers to pay the cost for that useless hardware inside Nvidia GPUs. Now who asked Nvidia to gimp compute again? It was none other than consumers themselves.
If we rely only on node shrinks to reduce power consumption then we will be stuck on performance, and going down a node is going to be harder from now on. We were stuck on 28nm for 3 years, and the next node might take even longer. Look what happened when AMD brute-forced performance up by increasing power consumption: they wanted the 390X/390 to match/exceed Nvidia's 980/970, so they increased the clocks and fed more power to the card. If you look at the average power consumption of the 290X vs the 390X, the increase was very big; even on average, the 390X's power consumption can easily reach the 350 W mark vs 250-260 W on the 290X. And there are cases of 390 owners whose 8-pin power connectors burned because the card tried to draw too much power through the 8-pin.
You can get that power
You can get better power efficiency by properly gating the ACE units, like AMD has done for its GPUs when the GPU cores are not needed, and not by removing hardware resources and Nvidia-Green-Goblin-gimping the compute resources completely out of the GPU, giving the user less gaming/GPGPU compute value for their dollars. DX12 and Vulkan will make use of AMD's more robust HARDWARE asynchronous compute resources, and AMD will have more GPU hardware resources and even better fine-grained power gating on its Arctic Islands SKUs; the same goes for AMD's HSA Foundation partners with their GPU asynchronous compute solutions.
We are talking about GPU architectures and hardware features, and you are talking about overclocking GPUs. AMD's Fury line, and future lines of products, will continue to be developed for power efficiency, and not just by reducing GPU compute to get more dies out of a wafer, Nvidia Green Goblin Gimping style! I would rather have GPU cores with more asynchronous hardware and FP/other resources that can be intelligently power gated, like AMD's, than Nvidia's Green Goblin Gimped GPUs that are not so good with the latest graphics APIs. Those AMD GPGPU ACE resources that you are calling "USELESS" are being utilized in DX12 and Vulkan, so the CPU has to do less of the gaming work, and less gaming work done on the CPU results in less total CPU-to-GPU latency, because more of the game can be run on those GPGPU ACE units!
The new graphics APIs are making good use of all those extra ACE hardware compute resources on AMD's GPUs, and that just irks the knee-jerk Green Goblin Gimping Nvidia apologist that you are. STOP THE GIMPING, Nvidia!!!
Give actual data point done
Give an actual data point, from a reviewer, showing that using ACEs can reduce power consumption. Otherwise you are just imagining things and hoping them to be true. Me, an Nvidia apologist? Hahaha, just because I say things you don't want to hear, I'm an Nvidia apologist?
Btw, Fiji, with its DP crippled even further than Hawaii's, says hi to you.
Why would you say a $500 gpu
Why would you say a $500 GPU is unneeded at 1080p? That makes no sense. I have my gaming rig hooked up to my 54″ TV in my home theater. It's a powerful machine, including a GTX 980. The TV is only 1080p, but even with my 980, games like The Witcher 3 at Ultra detail just barely maintain 60 fps. A slower card, such as the ones you recommend, would come nowhere near 60 fps without lowering the quality settings. But why lower the quality settings? If you're willing to do that, why not just get a gaming console at that point?
You play the Witcher 3 at
You play The Witcher 3 at Ultra on Linux?!?!?!?!
Of course not, I don’t
Of course not, I don't believe The Witcher 3 has even been released for Linux yet. 🙂 But in my experience, Linux and Mac versions of games almost always perform more poorly than their Windows equivalents, so even if The Witcher 3 were released on Linux, I'd still boot into Windows to do my gaming.
But OS choices aside, my original comment still stands: How is a top-tier video card too much for 1080p gaming? Modern AAA games, at their maximum detail settings, *still* stress top-tier GPUs and can have a difficult time reaching 60 fps. And again, why would I want to turn down detail settings and get a lower-end card? If I have a tight budget, sure, that makes sense. But having a tight budget is not the same as saying top-tier GPUs are overkill for 1080p gaming.
I should also comment that
I should also comment that Phoronix doesn't mention what quality settings they ran the games at. (Am I totally missing that?) To obtain those high framerates, are they turning down the details substantially? Or are the Linux versions of those games not as intensive as their Windows equivalents?
Well it depends. Sometimes
Well, it depends. Sometimes the Linux version of a game has visual downgrades compared to the Windows version. One example is Metro (I think it was the Redux version?). The Linux version only uses OpenGL 3.3, which I think is equivalent to DX10 features at best.
If I remember correctly, Civ: BE was probably the first game to use OpenGL 4.x. And when the game was actually close to release on Linux, Aspyr Media (the company that usually does the ports to Linux and Mac) mentioned that they might need to drop support for Intel and AMD graphics because their drivers still had no proper support for OpenGL 4.x on Linux.
Sometimes it is not that devs purposely dumb down the graphics for Linux; it is that the majority of the hardware does not have proper software for it.
Open source lovers love
Open source lovers love closed source drivers 🙂
Then can you hope for open
Then can you hope for open source devs to match the pace of binary driver development?
That’s just because the
That's just because the Nvidia monopoly is more interested in vendor lock-in than in its customers' ability to use their GPU products as they see fit with the customer's OS of choice. Do you enjoy your status as a monopoly's tool? I guess you are a big fan of Comcast, M$, and the other monopolies that try to milk the consumer for more profits while trying to reduce the customer's viewing/internet/gaming/compute experience. Enjoy your vendor lock-in and reduced hardware-based asynchronous compute ability, and say hello to more Green Goblin gimping away of even more compute resources by the Big Green Slime.
Lol. This has nothing to do
Lol. This has nothing to do with monopoly or vendor lock-in. And don't bring the compute bla bla bla here, because it is not about that.
Mindless Git, Yes MAN. Bend
Mindless Git, Yes MAN. Bend over and tell Nvidia to give you another royal one; you like that repeated year-in, year-out gimping of overall GPU performance from Nvidia! 3.5 = 4, and the other Green Goblin gimping of FP/other units.
Games use compute as much as graphics, and the games makers of the future need all the ACE units they can get their hands on, on the GPU and with the lowest latency!
you really are stupid. bring
You really are stupid, bringing in all that compute bla bla bla when we're not talking about that stuff. The guy said "open source lovers love closed source stuff". Well, in reality it is not really like that. Binary will always be ahead because the drivers come from the company itself, while open source drivers are made by the community. It's not that binary is bad, but they want something that is truly open. So if you look at it from one angle, what they are trying to do is somewhat reinvent the wheel: they want to replace the binary with something they worked on themselves. If they somehow manage to do it, then they might understand how the architecture works, and in the future, when official support is long dead, the community can do something about it, like making the hardware work with future versions of Linux, because they know how it works. But it needs a very big effort, and they need to know the architecture details, some of which might be confidential. Some hardware vendors might not have a problem releasing documentation for their old hardware, but for their newest they are not going to release it that quickly.
All drivers(or any software)
All drivers (or any software) come from source code and are compiled to binaries (executables); it's just that Nvidia will not release its driver source code, or the full software/hardware technical programming listings for its GPUs. Nvidia is looking for lock-in and forcing customer dependency on its products. So "binary" being "best" does not mean what you think it does: all code generated by a compiler comes from source code, and that source code is compiled into binary code/machine language, or into intermediate-language cross-platform pseudo-binaries that are JIT (just in time) or ahead-of-time compiled into the hardware's native assembly/machine language (binary/.EXE) code and run on the hardware.
So it's more about the Linux driver developers looking for the proper hardware/software and code documentation so they can write their own Nvidia drivers. Looking at Nvidia's source code is one way for the Linux driver developers to get that code ported over to the Linux kernel, but Nvidia will not allow that or provide the needed technical documentation. With the Khronos Group's Vulkan, though, there will be a cross-platform abstraction layer that allows for more close-to-the-metal performance, with the underlying hardware abstracted away but still letting games makers do things more HSA-style, with closer-to-the-metal features to allow for better gaming and to take advantage of hardware asynchronous GPU compute.
Hiding the ISA and features is not going to reveal that much about a GPU maker's underlying proprietary hardware implementation anyway, so Nvidia and others are still going to be suspected of vendor lock-in for not providing the Linux community with the proper GPU ISA manuals so the Linux driver makers can get the job done. There are some features in the Vulkan API that allow for proprietary hiding while still allowing access to the hardware, but still, the companies that give the Linux community the most necessary programming information about their hardware are the ones that will have the best Linux/Steam OS based Steam box sales. Valve will be able to do more Linux kernel patching with its Steam OS builds, and the games makers and Valve are putting/will put a lot of effort into using Vulkan as quickly as possible and getting legacy games converted over to the new Vulkan graphics API. Khronos will maintain and keep improving its older APIs for legacy purposes, and to give games developers enough time to transition over.
Nvidia is known for its G-Sync and other methods of proprietary API/driver lock-in, but gaming is going to become even more open source based in the future, especially where OSs are concerned, as game developers do not want to be under the complete control of any proprietary OS maker's walled-garden API ecosystem, or give up 30% off the top of any gaming sales revenue. The independent games makers have a hard enough time making ends meet as it is.
So AMD did pretty much access
So AMD did give pretty much full access to their GPUs; then why can the open source driver still not make AMD cards (especially newer ones like Fury) perform to their true potential? Even if Nvidia gave away all their source code or technical documentation, would an open source driver be ready the day the hardware launches? Heck, so far that's not even happening with AMD. Most people just want to use their hardware right now, not years later. But I understand the benefit of having open source drivers.
Proprietary or open, both have their place. If not for Nvidia pushing G-Sync, AMD would not have bothered with FreeSync despite knowing the hardware was there all along. Don't like G-Sync? Then just use FreeSync. And even that will still lock you to AMD GPUs right now.
Drivers for hardware are one
Drivers for hardware are one thing that has zero need for being open source.
Unless your hardware vendor
Unless your hardware vendor likes lock-in, and then those open source drivers will be of great help. Also, open source drivers mean that even the older hardware will get updated support from the community long after the device's maker moves on to newer hardware and abandons supporting the old.
Do tell me what vendor lock
Do tell me, what does vendor lock-in have to do with open source drivers?
Vulkan is coming online and
Vulkan is coming online and the entire graphics software/driver stack is going to be different. And there is no "FreeSync"; there is only VESA DisplayPort Adaptive-Sync(TM). Vulkan is AMD's promised open sourcing of Mantle. Expect that once Vulkan comes online there will be a greater movement towards getting gaming onto Steam OS, and with the Linux drivers supported by Valve, the gaming industry, and the Linux community, those open source Linux drivers will be supported like never before!
AMD has taken the Mantle project internal, with any new Mantle improvements going directly into Vulkan. Vulkan is just Mantle at the surface and all the way down! Expect that AMD's HARDWARE ACE units will be hosting more of the gaming code that used to be done on the CPU, with a whole lot less latency; just ask the VR games makers.
Amd can suggest new thing for
AMD can suggest new things for Vulkan, but it doesn't mean whatever they want will be accepted just like that. Every member of the Khronos Group has the right to suggest their own implementation. That's why initially AMD wanted Mantle to continue to exist alongside the current APIs (Richard Huddy even talked about Mantle 2.0 coming out when MS officially launches DX12, and being better than what DX12 can do): because in the hands of the Khronos Group, AMD will not be able to freely control the direction of the API.
120fps on Metro Last light
120 fps in Metro Last Light Redux on a GTX 950!! The Linux drivers can't be THAT much better. What settings are they running the benchmark at??
http://www.phoronix.com/scan.php?page=news_item&px=MTc3NjA
I did some quick checking, and it was Metro Last Light that uses OGL 3.2, not Redux. Nvidia's OpenGL drivers on Linux are quite on par with their Windows drivers, so it is not surprising that even a 950 seems fast in Last Light (Linux version) when it only uses much more limited graphical effects than the ones used in the Windows version.
Did they also mention how AMD
Did they also mention how AMD Linux drivers suck balls? My SteamOS machine has awful crackling sound.