For the last couple of years, Imagination Technologies has been pushing hardware-accelerated ray tracing. One of the major problems in computer graphics is knowing what geometry and material corresponds to a specific pixel on the screen. Several methods exist, although typical GPUs crush a 3D scene into the virtual camera's 2D space and do a point-in-triangle test on it. Once the GPU knows where within the triangle the pixel lies, assuming it lies within the triangle at all, a pixel shader can color it.
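To make that concrete, here is a minimal sketch of the point-in-triangle test that rasterizers are built around, using 2D edge functions. The function names and sample values are illustrative, not any particular GPU's implementation.

```python
# Minimal sketch of the point-in-triangle test at the heart of rasterization.
# An edge function gives the signed area spanned by an edge and the point;
# if the point is on the same side of all three edges, it is inside.

def edge(ax, ay, bx, by, px, py):
    # Sign tells which side of edge A->B the point P falls on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def inside_triangle(tri, px, py):
    (ax, ay), (bx, by), (cx, cy) = tri
    w0 = edge(ax, ay, bx, by, px, py)
    w1 = edge(bx, by, cx, cy, px, py)
    w2 = edge(cx, cy, ax, ay, px, py)
    # The three weights also say *where* in the triangle the pixel is,
    # which is where the pixel shader's interpolated inputs come from.
    return (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)

print(inside_triangle([(0, 0), (4, 0), (0, 4)], 1, 1))  # True: covered
print(inside_triangle([(0, 0), (4, 0), (0, 4)], 5, 5))  # False: outside
```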
Another method is casting light rays into the scene and assigning a color based on the material that each ray lands on. This is ray tracing, and it has a few advantages. First, it is much easier to handle reflections, transparency, shadows, and other effects where information is required beyond what the affected geometry and its material provide. There are usually ways around this without resorting to ray tracing, but they each have their own trade-offs. Second, it can be more efficient for certain data sets. Rasterization, since it's based around a “where in a triangle is this point” algorithm, needs geometry to be made up of polygons, while ray tracing only needs some way to intersect a ray with a surface, which lets it consume other representations directly.
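As a small illustration of that last point, here is a hedged sketch of a ray intersecting an analytic sphere, a shape a rasterizer would first have to chop into triangles. The names and scene values are made up for the example.

```python
import math

# A ray tracer only needs "does this ray hit the surface, and where?", so an
# analytic sphere can be rendered directly, with no tessellation. The ray is
# o + t*d; the sphere is |p - c|^2 = r^2; substituting gives a quadratic in t.

def ray_sphere(o, d, c, r):
    oc = [o[i] - c[i] for i in range(3)]
    a = sum(x * x for x in d)
    b = 2.0 * sum(d[i] * oc[i] for i in range(3))
    k = sum(x * x for x in oc) - r * r
    disc = b * b - 4.0 * a * k
    if disc < 0:
        return None                         # the ray misses entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest of the two roots
    return t if t > 0 else None

# A ray fired down -z from the origin hits a unit sphere 5 units away at t=4.
print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```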
It also has the appeal of being what the real world sort-of does (assuming we don't need to model Gaussian beams). That doesn't necessarily mean anything, though.
At Mobile World Congress, Imagination Technologies once again showed off their ray tracing hardware, embodied in the PowerVR GR6500 GPU. This graphics processor has dedicated circuitry to calculate rays, and they use it in a couple of different ways. They presented several demos that modified Unity 5 to take advantage of their ray tracing hardware. One particularly interesting demo was a quick, seven-second video that added ray traced reflections atop an otherwise rasterized scene. It was a little too smooth, creating reflections that were too glossy, but that could probably be downplayed in the material (Update, Feb 24th @ 5pm: Car paint is actually that glossy. It's a different issue). Back when I was working on a GPU-accelerated software renderer, before Mantle, Vulkan, and DirectX 12, I was hoping to use OpenCL-based ray traced highlights on otherwise idle GPUs, if I didn't have any other purposes for them. Now, though, those can be exposed to graphics APIs directly, so they might not be so idle.
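For a sense of what layering ray traced reflections on a rasterized scene involves, here is a sketch of the standard reflection-vector math a hybrid renderer would use to spawn a secondary ray from a rasterized surface point. This is generic vector algebra, not Imagination's actual pipeline.

```python
# At each reflective pixel, mirror the view direction d about the surface
# normal n (r = d - 2(d.n)n) and trace a new ray along r; whatever it hits
# supplies the reflected color that gets blended into the rasterized image.

def reflect(d, n):
    dn = sum(d[i] * n[i] for i in range(3))
    return tuple(d[i] - 2.0 * dn * n[i] for i in range(3))

# A view ray heading down into an upward-facing surface bounces straight up.
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 0.0)
```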
The downside of dedicated ray tracing hardware is that, well, the die area could have been used for something else. Extra shaders, for compute, vertex, and material effects, might be more useful in the real world… or maybe not. Add in the fact that fixed-function circuitry already exists for rasterization, and you are left weighing the gain against the cost.
It could be cool, but it has its trade-offs, like anything else.
Most real-time ray-tracing demos and games end up looking like CGI from the 90s. I understand what it could bring in terms of realism, but the amount of processing power we need to make it look good at reasonable frame rates just seems beyond our capabilities at the moment… Please prove otherwise!
Ray tracing does a lot of things automatically, but it still isn’t necessarily that easy to get good results. Even something like an automobile isn’t really a simple surface to model accurately. It does take a huge amount of processing to get realistic-looking results, although the performance is mostly going to be held back by the memory system. We have plenty of processing power on current GPUs. The problem is, you need fast, random access to the entire scene, which really can’t be delivered by any current memory architecture. Some new memory architectures (like HBM) may be interesting, but these are not designed to offer low latency. Low latency is a much harder problem to solve than high bandwidth, so the viable solutions will continue to be those that can be addressed with high bandwidth rather than low latency.
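A sketch of why that access pattern is so hostile to memory, assuming the common approach of tracing rays through a bounding volume hierarchy (BVH); the structure below is a toy stand-in, not any particular implementation.

```python
# Each ray walks the BVH top-down, and every step is a data-dependent fetch:
# the next node's address isn't known until the current node arrives, and
# divergent rays visit different nodes, so caches see little reuse.

class Node:
    def __init__(self, bounds, left=None, right=None, triangles=None):
        self.bounds = bounds        # axis-aligned box: (min_xyz, max_xyz)
        self.left, self.right = left, right
        self.triangles = triangles  # leaf payload; None for inner nodes

def traverse(node, ray_hits_box, hits):
    # Each recursion is another round trip to memory before the next
    # address is even known -- classic pointer chasing.
    if node is None or not ray_hits_box(node.bounds):
        return
    if node.triangles is not None:
        hits.extend(node.triangles)
        return
    traverse(node.left, ray_hits_box, hits)
    traverse(node.right, ray_hits_box, hits)

# Toy two-leaf tree and a stand-in "ray" that only checks the x extent.
leaf_a = Node(((0, 0, 0), (1, 1, 1)), triangles=["tri0"])
leaf_b = Node(((2, 0, 0), (3, 1, 1)), triangles=["tri1"])
root = Node(((0, 0, 0), (3, 1, 1)), left=leaf_a, right=leaf_b)

hits = []
traverse(root, lambda box: box[0][0] <= 0.5 <= box[1][0], hits)
print(hits)  # ['tri0']
```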
HBM is not designed to offer low latency, but it does end up reducing latency (relative to GDDR5). GPUs also play several tricks, like parking entire tasks while they wait on memory and running other work groups in the meantime.
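A toy model of that trick, with made-up cycle counts: once enough work groups are resident, the compute units stay busy even though each individual group spends most of its time waiting on memory.

```python
# W resident work groups, each doing C cycles of math between L-cycle memory
# waits; the cores saturate once W >= 1 + L/C. Numbers are illustrative.

def utilization(W, C, L):
    return min(1.0, W * C / (C + L))

for W in (1, 4, 16, 32):
    print(W, round(utilization(W, C=10, L=300), 2))
# 1 -> 0.03, 4 -> 0.13, 16 -> 0.52, 32 -> 1.0: more groups, fewer idle cycles
```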
As for the initial comment about it "looking like CGI from the 90s"… that seems to be due more to programmer / tech demo art than to the fundamental technology. Ray tracing doesn't really change what is drawn; it basically just provides a different set of data to the shader.
We don’t have “plenty” of processing power on the GPU, though. We regularly use all of what is available. You might want to look at how much processing power it took to render the movie “Monsters University” via ray tracing.
I don’t know where to start with your memory comment either, but let’s be clear: if your GPU is already running at 100%, then making your memory faster doesn’t matter.
You imply that using different memory would somehow, magically, make a graphics card way better for ray-tracing which is simply not true.
(I’m not sure you grasp the difference between high bandwidth and low latency either, but regardless, we have BUFFERS for processing data on the GPU when latency issues create delays getting new data to it. It’s the same concept as hyperthreading, which, in combination with the low-level caches, minimizes the time the CPU is left idle. If the processor is idle 5% of the time, then magically keeping it fully active can only speed up processing by 5%.)
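The arithmetic behind that last point, worked through (the 5% figure is the commenter's):

```python
# If a processor is idle some fraction of the time, then eliminating that
# idleness entirely can only shrink the runtime by that fraction.

def max_speedup(idle_fraction):
    return 1.0 / (1.0 - idle_fraction)

print(round(max_speedup(0.05), 3))  # 1.053 -> at best ~5% faster
print(round(max_speedup(0.50), 3))  # 2.0   -> big wins need big stalls
```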
Ray-tracing, like dynamic lighting, doesn’t have to be a “one-stop” solution. You might use it for only a portion of the tasks, or use it fully for tasks that aren’t done in real time (not everything is created for gamers, BTW).
It would be nice if Imagination Technologies would show this technology working in graphics applications on a tablet instead of only showing it working for games. Ray tracing takes a lot of CPU power, and for tablet-based graphics applications, if the ray calculations can be done/accelerated on dedicated GPU hardware, then things will be much better for tablet users, and even laptop users!
The gaming usage for this is not going to show as much, even with the hardware available from Imagination Technologies to assist games, but for graphics applications, having units on the GPU able to do ray tracing will definitely make this technology more attractive! Gaming graphics sacrifices most of the image fidelity in order to maintain playable frame rates. So while the ray tracing functional blocks help for some gaming reflection effects and such, a better usage to demonstrate would be using the ray tracing hardware to accelerate rendering in graphics applications.
If more can be done on the GPU (with dedicated ray tracing GPU hardware, or OpenCL ray tracing acceleration on the GPU), then those hours-long CPU-based ray renderings can be reduced to minutes. That one zombie game is not really going to highlight Imagination Technologies’ hardware ray tracing technologies to the fullest extent, so maybe they should do more graphics-arts demos and stress the time savings for rendering workloads when using the ray tracing hardware in the GPU to accelerate graphics application rendering.
If ray tracing could easily and effectively be accelerated on the GPU, then it would have been done a long time ago. You don’t see any ray tracing cards, or even ray tracing GPU renderers for games. I don’t know if computer animation studios are even using specialized hardware. The problem with ray tracing isn’t necessarily the number of processors you can throw at it; it is limited by the memory system. Casting rays results in random access throughout the entire scene, and for any scene of realistic size, this will not be cacheable.
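A back-of-the-envelope check on the "not cacheable" claim, with rough assumed sizes for triangles and acceleration-structure nodes:

```python
# Rough footprint of a modest scene versus on-chip cache. All sizes are
# ballpark assumptions, but the gap is wide enough that the exact numbers
# don't change the conclusion.

tris = 1_000_000
bytes_per_tri = 36        # 3 vertices x 3 floats x 4 bytes
bytes_per_bvh_node = 32   # bounds plus child links, roughly
bvh_nodes = 2 * tris      # a binary tree over one-triangle leaves

scene_mb = (tris * bytes_per_tri + bvh_nodes * bytes_per_bvh_node) / 1e6
print(round(scene_mb), "MB of scene vs a few MB of GPU cache")  # ~100 MB
```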
There have been hardware ray tracing chips made, but they still could not produce high enough performance to compete with rasterizing GPUs. This hardware could be used for essentially specialized shaders to compute effects (reflections, shadows, refractions, etc) on a limited amount of surfaces. We are not going to see full scene ray tracing with this small amount of specialized hardware. I doubt that this would be that useful for non-realtime ray tracing applications compared to throwing a large number of CPUs at the problem. Ray tracing done for movies is going to use even larger scenes with more complicated surfaces which takes a lot of memory.
AMD demonstrated Ray Tracing accelerated on the GPU using OpenCL at SIGGRAPH 2014! (1) So you are very much ill-informed, and you did not read the post that you replied to, as it was requesting more non-real-time graphics demos!!!
So the post was more about non-real-time ray acceleration for Blender and other 2D/3D graphics software taking advantage of the dedicated hardware in Imagination Technologies’ GPUs for non-gaming graphics uses!
(1) http://www.develop3d.com/blog/2014/08/amd-previews-opencl-ray-trace-renderer-on-maya-FirePro-Autodesk
And AMD’s info about the FireRays library and its usage! (2) P.S. the AMD software is open for all to use, and it’s cross-platform, so it will work on Windows/OSX/Linux!
(2) http://developer.amd.com/community/blog/2015/08/14/amd-firerays-library/
I was responding to some specifics in the original post, not everything!!!!! Yes, you can accelerate ray tracing on a GPU!!!! Is it going to compete with rasterization, no!!!!
It’s not about competing with rasterization for some speed rally, it’s about the Ray Tracing ability to produce the most natural results with shadows in shadows, reflections in reflections and other realistic effects that can only be done with the simulated rays in the Ray Tracing algorithms! There is no way rasterization can provide the realistic lighting and subsurface scattering effects that Ray Tracing can produce. So it’s not about some match with a definite winner, it’s about having the rendering done in minutes on the GPU and not hours on the CPU! CPUs are the biggest time thieves in the Graphics Industry while GPUs are what gets the work done quickly in a massively parallel fashion!
Intel’s Core i7s, and even AMD’s Zen SKUs when they arrive, are not up to the task of affordably putting thousands of processor cores to work doing the ray tracing calculations, and Imagination Technologies’ dedicated ray tracing technology goes even further, with hardware functional blocks engineered for ray tracing workloads. Those AMD GCN ACE units should make quick work of ray tracing workloads, saving graphics application users the many hours of delays that come from being dependent on the CPU for rendering tasks! Imagination Technologies is on to something good, but they need to market the technology to the non-gaming graphics market just as hard as they market to the gaming market! If all GPU makers had dedicated ray tracing hardware integrated on some of their GPU SKUs, then the dependency on the overpriced and unsuited-for-the-task CPU for graphics could be ended once and for all.
“It was a little too smooth, creating reflections that were too glossy, but that could probably be downplayed in the material.”
They are as glossy as they should be for car paint… they simply didn’t implement the Fresnel effect. It looks weird because the reflection magnitude shouldn’t be equal for all angles of incidence.
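For reference, here is the behavior being described, using Schlick's common approximation to the Fresnel equations; the F0 value below is a typical dielectric default, an assumption rather than a measured clear-coat value.

```python
import math

# Schlick's approximation: reflectance climbs toward 1.0 as the viewing
# angle grazes the surface, instead of staying constant at all angles.

def schlick(cos_theta, f0=0.04):
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

for deg in (0, 45, 80, 89):
    c = math.cos(math.radians(deg))
    print(deg, round(schlick(c), 3))
# 0 deg: 0.04 (head-on, dim reflection); 89 deg: ~0.92 (grazing, near-mirror)
```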
… huh. That's correct, and I now better understand some confusion I had with 3D materials. Thanks!
They were not doing any intensive ambient occlusion in that zombie game, and no sub-surface scattering either, as AO is very compute intensive too. You can’t beat AO and ray tracing for subtle effects, but that is mostly for non-real-time rendering workloads. The Fresnel effect is also something that needs compute resources. Still, I wish one of Imagination Technologies’ customers (Apple) would design a discrete mobile laptop GPU around this ray tracing hardware technology for graphics uses. There are some more demos that look better using this GPU ray tracing hardware, so I guess it depends on the game and the time the developers put into their work. This is all new technology, so hopefully there will be improvements.
We are now at the same point with ray tracing that we were with pixel shaders just before NVidia became a major player in the GPU market. At that time 3DFX had faster fixed-point rendering, but NVidia’s visuals looked better. In the end, NVidia won that battle and 3DFX is no more. Ray tracing is a similar trade-off. Scanline renderers are faster, but ray tracing has several innate advantages. It is much easier for a developer to create photorealistic visuals with ray tracing than with fragment shaders. In fact, it can be done at a high level with little custom coding needed. That means it is more suitable for Angry Birds on a smartphone than Call of Duty on a PC. Ray tracing also scales better than scanline rendering: it becomes more efficient the more geometry you throw at it. The fact that this Imagination Tech GPU even exists now, when ray tracing was considered impractical for GPUs just a few years ago, shows that we are nearing the tipping point where ray tracing becomes faster than scanline rendering for complex scenes. Finally, ray tracing can just plain do more than scanline rendering. You don’t need endless tricks and hacks to get a great-looking image; you just need physically based materials and lighting in a well-defined scene file.
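A rough cost model behind that scaling claim, assuming a BVH-based tracer does about log2(n) work per ray while a rasterizer touches every triangle; constants are ignored, so this shows a trend rather than a benchmark.

```python
import math

# Rasterization cost grows with triangle count n; ray tracing cost grows
# with ray count times ~log2(n). The crossover only arrives for very
# complex scenes, which is exactly the "tipping point" argument.

def raster_cost(n_tris):
    return n_tris

def raytrace_cost(n_tris, n_rays=2_000_000):  # ~one ray per 1080p pixel
    return n_rays * math.log2(n_tris)

for n in (10**5, 10**7, 10**9):
    print(f"{n:>10} tris  raster {raster_cost(n):.1e}  raytrace {raytrace_cost(n):.1e}")
```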