NVIDIA has long been interested in carving out room in the market for high-performance computing for scientific research. For many workloads, a fast and highly parallel processor saves time and money compared to letting a traditional computer crunch away or booking time on one of the world's relatively few supercomputers. Raw GPU performance is not enough on its own, though: adequate development tools are required to turn a simulation or calculation into a functional program that executes on said GPU. NVIDIA has held a strong lead with their CUDA platform for quite some time, and that lead will likely continue with releases the size of this one.
What does a tuned-up GPU purr like? Cuda cuda cuda cuda cuda.
The most recent release, CUDA 4.1, has three main features:
- A visual profiler that points out common mistakes and optimization opportunities, and provides step-by-step instructions on how to alter your code to improve performance
- A new compiler which is based on the LLVM infrastructure, making good on their promise to open the CUDA platform to other architectures — both software and hardware
- New image and signal processing functions for their NVIDIA Performance Primitives (NPP) library, relieving developers of the need to create their own versions or license a proprietary library
The three features, as NVIDIA describes them in their press release, are listed below.
New Visual Profiler – Easiest path to performance optimization
The new Visual Profiler makes it easy for developers at all experience levels to optimize their code for maximum performance. Featuring automated performance analysis and an expert guidance system that delivers step-by-step optimization suggestions, the Visual Profiler identifies application performance bottlenecks and recommends actions, with links to the optimization guides. With the new Visual Profiler, performance bottlenecks are easy to identify and act on.
LLVM Compiler – Instant 10 percent increase in application performance
LLVM is a widely-used open-source compiler infrastructure featuring a modular design that makes it easy to add support for new programming languages and processor architectures. Using the new LLVM-based CUDA compiler, developers can achieve up to 10 percent additional performance gains on existing GPU-accelerated applications with a simple recompile. In addition, LLVM’s modular design allows third-party software tool developers to provide a custom LLVM solution for non-NVIDIA processor architectures, enabling CUDA applications to run across NVIDIA GPUs, as well as those from other vendors.
New Image, Signal Processing Library Functions – "Drop-in" Acceleration with NPP Library
NVIDIA has doubled the size of its NPP library, with the addition of hundreds of new image and signal processing functions. This enables virtually any developer using image or signal processing algorithms to easily gain the benefit of GPU acceleration, with the simple addition of library calls into their application. The updated NPP library can be used for a wide variety of image and signal processing algorithms, ranging from basic filtering to advanced workflows.
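To give a feel for what "drop-in" acceleration means in practice, here is a minimal sketch that sums two signals on the GPU with a single NPP library call instead of a hand-written kernel. It assumes a CUDA-capable card plus the NPP headers and library that ship with the toolkit, and the buffer sizes and values are made up purely for illustration; treat it as a sketch of the pattern, not a benchmark.

```
// Minimal sketch: summing two signals on the GPU via NPP instead of a custom kernel.
// Assumes the CUDA runtime and the NPP library (npp.h, link against NPP) are available.
#include <cstdio>
#include <cuda_runtime.h>
#include <npp.h>

int main()
{
    const int n = 1 << 20;                      // one million samples (arbitrary)
    float *hA = new float[n], *hB = new float[n], *hSum = new float[n];
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device buffers allocated through NPP's own helper functions.
    Npp32f *dA   = nppsMalloc_32f(n);
    Npp32f *dB   = nppsMalloc_32f(n);
    Npp32f *dSum = nppsMalloc_32f(n);

    cudaMemcpy(dA, hA, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, n * sizeof(float), cudaMemcpyHostToDevice);

    // The "drop-in" part: one library call instead of writing and tuning a kernel.
    NppStatus status = nppsAdd_32f(dA, dB, dSum, n);
    if (status != NPP_SUCCESS)
        printf("nppsAdd_32f failed with status %d\n", (int)status);

    cudaMemcpy(hSum, dSum, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element of the sum: %f\n", hSum[0]);   // expect 3.0

    nppsFree(dA); nppsFree(dB); nppsFree(dSum);
    delete[] hA; delete[] hB; delete[] hSum;
    return 0;
}
```

The image-processing side of the library follows the same allocate, copy, call, copy-back pattern, just with 2D step/ROI parameters instead of a flat length.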
Does this mean we could see more support for GPU-accelerated video encoding? I have noticed that GPU video encoding, although quick, still lacks quality compared to the multi-threaded CPU H.264 solutions.
No, it doesn’t. CUDA is not for this purpose; it’s for doing work. Ya know, like using the video card of CUDA-capable cards for BOINC projects like SETI@home, Einstein@Home, etc., etc…
Actually, it is for more than just that, and it can be used for video transcoding and such. The point is to offload processing to the video card where it makes sense. Tim Sweeney of Epic has even been talking for a while now about game engines written in CUDA-like GPGPU languages, where the branching logic and similar work is allocated to the CPU and the parallel work is allocated to the GPU.
Such a game engine would look very close to the simpler software rendering engines before DirectX and OpenGL took over. Some engines will actually stop using DirectX and OpenGL as a result of this at some point… to not be tied down by arbitrary caveats and limitations of the APIs.
But that’s not really all that soon, and that’s also not every engine… mostly just ones like UnrealEngine where the developers want to REALLY control everything.
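As a rough illustration of the split the commenter above describes, here is a hypothetical CUDA C sketch: the CPU makes the branch-heavy decision about what should happen, and the GPU then runs the uniform per-element work across thousands of threads. The kernel name, the "slow motion" flag, and the numbers are invented for the example; it only shows the general offloading pattern.

```
#include <cstdio>
#include <cuda_runtime.h>

// The data-parallel part: every thread updates one element independently.
__global__ void scale_positions(float *pos, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] *= factor;
}

int main()
{
    const int n = 65536;
    float *hPos = new float[n];
    for (int i = 0; i < n; ++i) hPos[i] = (float)i;

    float *dPos = 0;
    cudaMalloc((void **)&dPos, n * sizeof(float));
    cudaMemcpy(dPos, hPos, n * sizeof(float), cudaMemcpyHostToDevice);

    // The branchy, game-logic side stays on the CPU: decide *what* to do...
    bool slowMotionEnabled = true;                  // hypothetical game state
    float factor = slowMotionEnabled ? 0.25f : 1.0f;

    // ...then hand the uniform, per-element work to the GPU in a single launch.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_positions<<<blocks, threads>>>(dPos, factor, n);

    cudaMemcpy(hPos, dPos, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("pos[1000] = %f\n", hPos[1000]);         // 250.0 while slow motion is on

    cudaFree(dPos);
    delete[] hPos;
    return 0;
}
```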
“A new compiler which is based on the LLVM infrastructure, making good on their promise to open the CUDA platform to other architectures — both software and hardware”
Does that mean that it will run on other brands of video cards? I’d never heard of LLVM before, but I know that one of the general complaints against CUDA is that, because it’s not open and only runs on roughly half of the hardware out there (Nvidia vs. AMD/ATI, etc.), it lacks widespread adoption?
Per my understanding, the short answer is no, CUDA on AMD is not happening. Maybe some day.
I would even say probably NEVER. Why? Other options for programming to GPUs will take off and leave CUDA and its proprietary platform behind. We are already seeing it with OpenCL and C++ AMP.
I just want some new NVIDIA cards, already. It’s the only thing putting me off building a new machine to replace my two-year-old one.