The open-source driver for Intel graphics is known to be a little behind on Linux. Because Intel does not provide as much support as they should, the driver still does not support OpenGL 4.0, although that is changing. One large chunk of that API is tessellation, a feature that arrived with the DirectX 11 generation, and recent patches are adding it for supported hardware. Proprietary drivers exist, at least for some platforms, but they have their own issues.
According to the Phoronix article, once the driver achieves OpenGL 4.0 support, the path to 4.2 should open up fairly quickly. Tessellation is a huge hurdle, partially because it involves adding two whole shading stages to the rendering pipeline. Broadwell GPUs recently received the feature, and a patch committed yesterday expands it to Ivy Bridge and Haswell. On Windows, Intel is far ahead, pushing OpenGL 4.4 for Skylake-based graphics, although that platform only has proprietary drivers. AMD and NVIDIA are up to OpenGL 4.5, which is the latest version.
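For the curious, "two whole shading stages" means a tessellation control shader and a tessellation evaluation shader, slotted in after the vertex shader. Below is a minimal sketch of the application-side setup, assuming a GL 4.0 context is already current and a loader such as GLEW provides the entry points; the GLSL is a deliberately trivial pass-through.

    #include <GL/glew.h>  /* assumes GLEW (or another loader) and a current GL 4.0 context */

    /* Tessellation control shader: runs per output control point and
       sets the tessellation levels for the whole patch. */
    static const char *tcs_src =
        "#version 400\n"
        "layout(vertices = 3) out;\n"
        "void main() {\n"
        "    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;\n"
        "    if (gl_InvocationID == 0) {\n"
        "        gl_TessLevelOuter[0] = 4.0;\n"
        "        gl_TessLevelOuter[1] = 4.0;\n"
        "        gl_TessLevelOuter[2] = 4.0;\n"
        "        gl_TessLevelInner[0] = 4.0;\n"
        "    }\n"
        "}\n";

    /* Tessellation evaluation shader: runs per generated vertex and places
       it within the patch using barycentric coordinates. */
    static const char *tes_src =
        "#version 400\n"
        "layout(triangles, equal_spacing, ccw) in;\n"
        "void main() {\n"
        "    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position\n"
        "                + gl_TessCoord.y * gl_in[1].gl_Position\n"
        "                + gl_TessCoord.z * gl_in[2].gl_Position;\n"
        "}\n";

    /* Attach the two new stages to an existing program (error checks omitted). */
    void add_tessellation_stages(GLuint program)
    {
        GLuint tcs = glCreateShader(GL_TESS_CONTROL_SHADER);
        GLuint tes = glCreateShader(GL_TESS_EVALUATION_SHADER);
        glShaderSource(tcs, 1, &tcs_src, NULL);
        glShaderSource(tes, 1, &tes_src, NULL);
        glCompileShader(tcs);
        glCompileShader(tes);
        glAttachShader(program, tcs);  /* alongside the usual vertex/fragment shaders */
        glAttachShader(program, tes);
        glLinkProgram(program);

        glPatchParameteri(GL_PATCH_VERTICES, 3);  /* draws now consume patches... */
        /* ...so render with glDrawArrays(GL_PATCHES, 0, vertex_count); */
    }

The driver work discussed above is, in essence, the plumbing underneath those two new shader types.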
While all of this is happening, Valve is working on an open-source Vulkan driver for Intel on Linux. This API will be released alongside OpenGL, and is built for high-performance graphics and compute. (Note that OpenCL is more sophisticated than Vulkan "1.0" will be on the compute side of things.) As nice as it would be to get high-end OpenGL support, especially for developers who want a simpler structure for communicating with GPUs, Vulkan will probably be the API that matters most for high-end video games. But again, that only applies to games that are developed for it.
Small mistake above where you use OpenCL and probably meant to use OpenGL.
As for Intel and Linux, I had thought they were doing better with their Linux support than AMD. I'm a bit surprised to find them so far behind, but it's good to see them working on it, because even when Vulkan is released, people will still build programs for OpenGL.
No. "Note that OpenCL is more sophisticated than Vulkan "1.0" will be on the compute side of things" was intentional; it refers to OpenCL. While Vulkan could be used for things like video encode, scientific calculations, etc., you lose features like HSA (at least in the initial Vulkan release). Vulkan should be more sophisticated than OpenGL compute shaders across the board, though, especially with the command queue model allowing devs to target multiple, unrelated GPUs.
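For a sense of what that model looks like: a single Vulkan instance enumerates every capable GPU in the system, whatever the vendor, and you create a logical device (and its queues) per GPU. Here's a rough sketch against the Vulkan C API as it's shaping up; since 1.0 isn't final yet, treat the exact names as provisional.

    #include <vulkan/vulkan.h>
    #include <stdio.h>

    int main(void)
    {
        /* One instance sees every Vulkan-capable GPU, related or not. */
        VkApplicationInfo app = {
            .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
            .apiVersion = VK_MAKE_VERSION(1, 0, 0),
        };
        VkInstanceCreateInfo info = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .pApplicationInfo = &app,
        };
        VkInstance instance;
        if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS)
            return 1;

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, NULL);   /* query the count */
        VkPhysicalDevice gpus[8];
        if (count > 8)
            count = 8;
        vkEnumeratePhysicalDevices(instance, &count, gpus);   /* fetch the handles */

        for (uint32_t i = 0; i < count; ++i) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpus[i], &props);
            printf("GPU %u: %s\n", i, props.deviceName);
            /* From here you would create a VkDevice per GPU and submit
               command buffers to each one's queues independently. */
        }

        vkDestroyInstance(instance, NULL);
        return 0;
    }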
I think it's LunarG that is making that Vulkan Linux driver for Intel, and Valve is mainly sponsoring it.
It's hard to tell how it's structured internally. It's possible that Valve was making the driver, and LunarG is making the SDK… or there could be cross-over, etc. I probably should give them credit somewhere, though…
Actually, Michael did a wrap-up of LunarG's Reddit Ask-Me-Anything in March, which makes me think it's a LunarG-made driver:
http://www.phoronix.com/scan.php?page=news_item&px=LunarG-Vulkan-AMA
https://www.reddit.com/r/IAmA/comments/2ypils/we_are_lunarg_funded_by_valve_to_improve_steamos/
WOW SCOTT, you are pumping out the articles this holiday season……… year end quota?
Heh, no. This is roughly my normal publishing rate.
Not worried; Intel can throw more engineers at a problem than AMD and NVIDIA combined, several times over.
Exactly! IF it becomes financially worthwhile to have top-notch Linux drivers, specifically meaning "a scenario where a significant (to our massive bottom line) number of customers will choose a different manufacturer due to Linux compatibility," then, AND ONLY THEN, will Intel throw its army of intelligence at the problem, probably bang out perfection in a week, and move on. AT THIS POINT Intel knows quite well that feeding a segmented market is great for consumers, NOT producers.
I'm not too sure about that. If you look at Larrabee's development, Intel threw billions at trying to make an x86-based GPU. It didn't come together, because it seems like GPU development is sensitive to development time and involvement in standards, not money.
On the other hand, Intel seems to take DirectX 12 and Vulkan seriously. It would be interesting to see whether Intel tosses blank cheques to get into the high-end GPU market, once benchmarks focus on games that are developed under the new generation of APIs. They certainly have a solid background in IC design and writing compilers for their hardware.
It seems like DX12/Vulkan may make architectures like Xeon Phi better able to handle graphics workloads due to the asynchronous compute abilities. I don’t know if Intel uses similar processing elements between Xeon Phi and their graphics cores. They may share some design elements since the memory system would have similar constraints. I have never seen anything detailing the compute organization of Intel graphics cores. We generally see information on the organization of compute cores in AMD and Nvidia units though.
For DX12/Vulkan, that's asynchronous compute on the GPU, not so much on the CPU, and AMD has fully hardware-based asynchronous compute on its GCN GPUs. The CPU offers more limited multiprocessing, and CPUs lack the massive numbers of CUs/EUs that GPUs have; VR gaming will need that asynchronous compute ability implemented fully in the GPU's hardware to stay responsive.
CPUs do have asynchronous compute abilities, but they lack the massive numbers of CUs/EUs that GPUs provide for gaming, and now also for gaming compute on the GPU's cores via the DX12 and Vulkan graphics APIs! A CPU can never efficiently run games without the help of a GPU, and now even GPUs are running compute that used to be done on the CPU, so expect the CPU to become less of a factor in gaming performance once DX12 and Vulkan are more widely used by game and engine makers.
They weren't talking about normal CPUs. They were talking about Xeon Phi, which has many cores, each with ultra-wide registers. One 512-bit register acts as a 16-wide FP32 warp/wavefront.
They use the same assumptions that GPUs use to get higher throughput per clock. Neighbouring tasks run in lockstep, so fewer transistors are needed per item. AMD does 64, NVIDIA does 32, and Intel's AVX-512, as I've said earlier, does 16 for single-precision. You get the idea. A 72-core Knight's Landing Xeon Phi is equivalent to a couple thousand CUDA cores in throughput, roughly a Titan X (~6 TFLOPS). Granted, it does that at 14nm while NVIDIA's GPU is 28nm. NVIDIA also has a few fixed-function ASICs on the GPU that Intel doesn't.
That said, Intel's part probably does certain things better than NVIDIA's… we just don't know exactly what that is yet. Well, we know it's faster at FP64: Knight's Landing's 64-bit performance is 1/2 of FP32, while Titan X FP64 is 1/32 of FP32.
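To make the 16-wide point concrete, here's a trivial SAXPY sketch with AVX-512 intrinsics. Just a sketch, assuming an AVX-512F-capable compiler and CPU (e.g., built with -mavx512f):

    #include <immintrin.h>

    /* One AVX-512 instruction operates on 16 packed floats in lockstep --
       structurally the same trick as a 16-lane warp/wavefront.
       Computes y = a*x + y (SAXPY) for n elements. */
    void saxpy16(float *y, const float *x, float a, int n)
    {
        __m512 va = _mm512_set1_ps(a);            /* broadcast a to all 16 lanes */
        int i;
        for (i = 0; i + 16 <= n; i += 16) {
            __m512 vx = _mm512_loadu_ps(x + i);   /* load 16 elements */
            __m512 vy = _mm512_loadu_ps(y + i);
            vy = _mm512_fmadd_ps(va, vx, vy);     /* 16 fused multiply-adds at once */
            _mm512_storeu_ps(y + i, vy);
        }
        for (; i < n; ++i)                        /* scalar tail for n % 16 */
            y[i] = a * x[i] + y[i];
    }

Each iteration of that loop issues the same work a 16-lane warp would handle per instruction.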
Xeon Phis lack the tessellation units, ROPs, and other hardware-based graphics capabilities! A "couple thousand CUDA cores" does not equal Nvidia's or AMD's middle-to-high-end core counts either! The Xeon Phi can only do FP on those AVX units, so any other graphics workloads will have to be implemented in software, and that will never be as efficient as the dedicated graphics units that GPUs have.
The Phi will also have to be clocked higher, and we know what that does for power usage and efficiency. Certainly the Phi will run x86-based code better than GPUs (which can't run x86 code at all), but for graphics, and for number crunching too, the Phi is going to fall behind the professional-level GPU accelerators and the consumer variants coming online for 2016! Just wait for the new GPUs on the 14nm/16nm process node: GPUs are designed using high-density design libraries, so at 14nm/16nm there will be a lot more circuitry packed into a unit area of a GPU than in Intel's Atom-based cores on the Xeon Phi, which are laid out on a high-power, low-density design library normally used for CPU layouts meant to run at higher clock speeds.
Intel will be charging a premium for the Phi, while in 2016 most consumers will be able to purchase a Greenland/Arctic Islands or Pascal-based flagship that could put the Xeon Phi to shame for FP compute workloads at about a third of the cost of Intel's SKU. GPUs make up for their lower-clocked processors by having far more FP/INT and other units relative to even the Xeon Phi's AVX units. And let's see that Xeon Phi equal the available bandwidth of the HBM2-based 2016 offerings from Nvidia and AMD, or even AMD's Fury GPUs with their HBM1 memory.
GPUs have much wider on-die memory fabrics than even the Phi can provide with its two-cores-per-tile sharing of the on-die connection fabric. Just try to get that Phi running gaming graphics with the processing done in software and compare it to a GPU's dedicated graphics hardware: watch those Atom cores in the Xeon Phi become I/O bound trying to fetch and execute the code from memory, while the GPU does the same work in hardware without having to repeatedly fetch instructions for software-based execution. The GPU has most of its graphics functionality implemented in hardware and microcode on each of its specific graphics and execution units, while the Phi will be clogged up with memory accesses trying to fill cache-miss requests on the mountains of code it takes to implement ROP, SP, tessellation, and other GPU functionality in software.
The Phi's SIMD instruction is not exactly a warp or wavefront that can dispatch varied instructions to many execution units like a GPU does: each Atom core can only run a limited number of processor threads, whereas a GPU can have thousands of different processor threads in flight at any one time doing more than just the FP math of the Xeon Phi's AVX instructions. GPUs have a lot more warps or wavefronts operating in parallel, on many types of instructions, not just SIMD instructions.
Okay, there's quite a few things in here.
The first one that catches my eye is, "And let's see that Xeon Phi equal the available bandwidth of the HBM2-based 2016 offerings from Nvidia and AMD, or even AMD's Fury GPUs with their HBM1 memory."
Xeon Phi does use a sister of HBM, called MCDRAM. Knight's Landing uses four stacks of 4GB, 16GB total. There's disagreement over how much bandwidth it has, but it sounds about on par with Fury X (~500GB/s).
Second, yes, NVIDIA and AMD have a bunch of fixed-function hardware on their GPUs that Knight's Landing doesn't. That said, Knight's Landing has tricks of its own. We don't know what those are yet (apart from high FP64 performance). Some of them might be useful for graphics. Intel is kind of tight-lipped about these things. Also, it's unclear how much, or how easily, Intel could modify Xeon Phi to be a fully-functional GPU. It might be a lot, but I wouldn't write it off.
Third, I have no idea what you're talking about regarding Phi's SIMD vs. a warp/wavefront. I've done OpenCL development and, within a warp, I can't really think of anything that couldn't be done on a SIMD register, given an effective compiler. Even branch-divergence would be possible if the SIMD registers were allowed to ignore operations selectively within a register.
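In fact, AVX-512's per-lane write masks are precisely that "ignore operations selectively" mechanism. A minimal sketch of one divergent branch, again with AVX-512F intrinsics (a hypothetical example): both sides are evaluated, and the mask picks a result per lane, which is also how GPUs handle divergent warps.

    #include <immintrin.h>

    /* Branch divergence, SIMD-style: y[i] = (x[i] > 0) ? 2*x[i] : x[i]
       for 16 floats, with no per-element branching. */
    void diverge16(float *y, const float *x)
    {
        __m512 vx = _mm512_loadu_ps(x);
        /* One mask bit per lane: set where x[i] > 0 (the "taken" lanes). */
        __mmask16 taken = _mm512_cmp_ps_mask(vx, _mm512_setzero_ps(), _CMP_GT_OQ);
        __m512 doubled = _mm512_add_ps(vx, vx);        /* the "if" side */
        /* Blend: lanes with mask=1 take 'doubled', lanes with mask=0 keep vx. */
        __m512 vy = _mm512_mask_blend_ps(taken, vx, doubled);
        _mm512_storeu_ps(y, vy);
    }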
From a cost standpoint… yes, Xeon Phi is expensive. Teslas, Quadros, and FirePros are expensive, too. It doesn't mean that Intel cannot make consumer SKUs based on the research they've done with Xeon Phi.
I feel like you are comparing apples and trash compactors. Creating, especially physical objects (hardware), is HARD!!!!! Tuning, updating, tweaking/twerking/twonking an existing product, ESPECIALLY SOFTWARE, is simply a matter of human hours.
I believe we're saying the same thing. Intel cannot just dump money at the issue. It helps to acquire the best engineers, a lot of them, for a long time, but Intel was sitting on their butts selling GMA for a decade.
During this time, AMD and NVIDIA were grinding out a huge lead. Adding engineers won't just make this problem go away. They've tried. These engineers need to work and work and work to solve these problems, and do so within standards that they barely had a token involvement in. Once (if) they catch up, then Intel's "I will spend the equivalent of your revenue in R&D" philosophy could flip the industry… if they want to bother.
It looks like it's damn near impossible to find an AMD-based laptop that comes with a Linux OS factory installed. I'm looking at the Lenovo Y700 at BB, but I'm also looking online to see if anyone has installed a Linux OS on the Carrizo FX8800P-based Y700 gaming laptop, with its integrated and discrete mobile AMD GCN graphics. Currently there appears to be very little in the way of AMD-based Steam Machines, and hopefully Vulkan will make things better for those looking for AMD-based gaming/graphics systems in 2016. Intel definitely has the majority of the Linux laptop market, with Nvidia supplying most of the discrete mobile GPUs that go into Linux-based laptops. I'd like to have a laptop without Windows, Intel, or Nvidia products inside!
I wish Phoronix could get the Lenovo Y700 (FX8800P) that is currently a BB exclusive and see if they could get a Linux distro up and running, before I decide whether to purchase the laptop.
It is annoying the way a lot of these systems are shipped. I would generally want a full, clean version of Windows with the laptop if I were going to get Windows. I would often want to set up a dual-boot system, and I would rather have a full version of Windows than a system recovery disk or recovery partition.
I have been looking at a boutique builder because most laptops from large OEMs are rather disappointing, and often do not allow upgrading any components. A builder named Mythlogic offers builds based on Clevo chassis with many configurable options. I have not purchased from them, so I have no idea how good they are. I think I saw a review of one of their systems on Anandtech a few years ago though. They will sell a laptop with several versions of Windows, Ubuntu, or no OS at all.
I guess those aren't AMD-based, though. I wasn't even considering going AMD for mobile until they get their new parts out on a smaller process (<28 nm). An AMD APU is a good option for a low-cost system, but these generally lack the configurability options available in high-end systems.
I just need information about getting Linux running on the FX8800P-based Lenovo Y700; I'd buy one, wipe the Windows 10 install, and put Linux on it. The Y700 is the first gaming laptop SKU using the Carrizo FX8800P with its latest GCN integrated graphics and discrete mobile GCN graphics, and I cannot understand why none of the Linux-based websites have attempted getting Linux running on the Y700! If any laptop is a candidate for some experiments, it's this one. So Phoronix, or even the Debian/SteamOS developers, should use this SKU for testing. Lenovo should know that some people will not be using Windows 10, and should at least have other OS options available for its AMD APU-based SKUs!