The end-game idea is for a developer to be able to write generic code that is not aimed at either a CPU or a GPU, but rather lets the OpenCL implementation determine which available processor is best suited for a given piece of work. If a workload would run best on a GPU, and one is available in the system, the software would utilize it; but if a section of code is highly serial and best suited for a traditional processor, it would run on the x86 cores. At least, that's the idea. It sounds easy when written out in such a fashion, but in practice it will be very difficult to accomplish without a lot of up-front work by the developer.
AMD demoed Havok Cloth running with OpenCL quite some time ago
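To make the dispatch idea above concrete, here is a minimal sketch in plain Python (no real OpenCL binding) of the "prefer a GPU, fall back to the CPU" logic. All of the names here (`Device`, `pick_device`, the device list) are illustrative inventions, not OpenCL API; the actual standard expresses the same pattern by requesting `CL_DEVICE_TYPE_GPU` from `clGetDeviceIDs` and retrying with `CL_DEVICE_TYPE_CPU` if no GPU is found.

```python
class Device:
    """Illustrative stand-in for an OpenCL compute device."""
    def __init__(self, name, kind):
        self.name = name   # e.g. "Radeon HD 4870"
        self.kind = kind   # "gpu" or "cpu"

def pick_device(devices, parallel):
    """Prefer a GPU for data-parallel kernels; use the CPU for
    serial work or when no GPU is installed."""
    if parallel:
        for d in devices:
            if d.kind == "gpu":
                return d
    # Serial code, or no GPU present: run on the x86 cores.
    for d in devices:
        if d.kind == "cpu":
            return d
    raise RuntimeError("no usable compute device found")

system = [Device("Core 2 Quad", "cpu"), Device("Radeon HD 4870", "gpu")]
print(pick_device(system, parallel=True).name)   # -> Radeon HD 4870
print(pick_device(system, parallel=False).name)  # -> Core 2 Quad
```

The hard part, as the paragraph above notes, is not this selection step but deciding *automatically* which sections of a program are parallel enough to benefit from the GPU, which today still falls largely on the developer.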
In fact, NVIDIA has demonstrated its CUDA API working on both CPUs and GPUs – using the exact same compiled code. So we know this is something that CAN be done – it is more a matter of how efficiently these context switches can take place and be implemented on a larger scale.
Larrabee is no doubt another kink in this whole OpenCL story, as it spans the gap between traditional GPUs and CPUs by offering a many-core x86 design with specific vector instruction optimizations. And of course there is Apple, one of the progenitors of the OpenCL standard, with full support for it coming in the company's soon-to-be-released Snow Leopard operating system. Now that each and every one of Apple's computers ships with an OpenCL-capable GPU, it could make for some interesting paradigm shifts in computing and programming over the coming months and years.