This year marked the 40th anniversary of the availability of the world's first microprocessor, the Intel 4004, and Intel is as strong as ever. On the supercomputing and HPC (High Performance Computing) front, Intel processors power the majority of the Top 500 supercomputers, and at this year's supercomputing conference (SC11) the company talked about its current and future high performance silicon. Mainly, Intel discussed its new Xeon E5 family of processors and Knights Corner, the new Many Integrated Core (MIC) successor to Larrabee.

The Intel Xeon E5 is available now.

The new Xeon chips are launching now and should be widely available within the first half of 2012. Several (lucky) supercomputing centers have already gotten their hands on the new chips, which now power 10 systems on the Top 500 list; across those systems, 20,000 Xeon E5 CPUs deliver a combined 3.4 Petaflops.

Based on its benchmarks, Intel expects a respectable 70% performance increase on HPC workloads versus the previous-generation Xeon 5600 CPUs. Further, Intel stated that the new E5 silicon is capable of as much as a 2x increase in raw FLOPS performance, according to Linpack benchmarks.

Intel is reporting that demand for the initial production run of chips is “approximately 20 times greater than previous generation processors.” Rajeeb Hazra, General Manager of Technical Computing in Intel’s Datacenter and Connected Systems Group, stated that “customer acceptance of the Intel Xeon E5 processor has exceeded our expectations and is driving the fastest debut on the TOP 500 list of any processor in Intel’s history.” The company also pointed to several supercomputers that are set to go online soon powered by the new E5 CPUs, including the 10 Petaflops Stampede system at the Texas Advanced Computing Center and the 1 Petaflops Pleiades expansion for NASA.

While Intel processors power the majority of the world’s fastest supercomputers, graphics card hardware and GPGPU software have started to make their way into quite a few systems as powerful companion processors that can greatly outperform a similar number of traditional CPUs (assuming the software can take advantage of the GPU hardware, of course). In response, Intel has been working on its own MIC (Many Integrated Core) solution for a few years now. Starting with Larrabee, then Knights Ferry, and now Knights Corner, Intel has been developing silicon that uses numerous small processing cores running the x86 instruction set to power highly parallel applications. Examples Intel gives as useful applications for its MIC hardware include weather modeling, tomography, and protein folding.

Knights Corner is the company’s latest iteration of MIC hardware, and the first that will be commercially available. It is capable of delivering more than 1 Teraflops of double precision floating point performance. Hazra stated that “having this performance now in a single chip based on Intel MIC architecture is a milestone that will once again be etched into HPC history,” much like Intel’s first Teraflops supercomputer, which utilized 9,680 Pentium Pro CPUs in 1997.

What’s most interesting about Knights Corner is the hardware’s ability to run existing applications without porting them to alternative programming languages like Nvidia’s CUDA or AMD’s Stream. That is not to say the hardware itself is uninteresting: Knights Corner will be produced using Intel’s Tri-Gate transistors on a 22nm manufacturing process and will feature “more than 50 cores.” Unlike current GPGPU solutions, the Knights Corner hardware is fully accessible and can be programmed as if the card were its own HPC node running a Linux-based operating system.
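To make that distinction concrete, here is a minimal sketch of the kind of code in question: a standard OpenMP dot product in C that already runs on any multi-core x86 Linux box, and which in principle could be recompiled for a many-core x86 part without a CUDA or Stream rewrite. This is generic illustrative code, not Intel's actual MIC toolchain, compiler flags, or offload syntax.

```c
/*
 * Minimal sketch: a plain OpenMP dot product, the sort of existing
 * x86 parallel code the MIC approach targets. Portable C/OpenMP only;
 * nothing here is specific to Intel's MIC software stack.
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const size_t n = 1 << 24;            /* ~16M elements */
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    if (!x || !y)
        return 1;

    for (size_t i = 0; i < n; i++) {
        x[i] = 1.0;
        y[i] = 2.0;
    }

    double sum = 0.0;
    /* The same pragma spreads the loop across 4 host cores
     * or 50+ small cores; the source does not change. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += x[i] * y[i];

    printf("dot = %f (max threads: %d)\n", sum, omp_get_max_threads());
    free(x);
    free(y);
    return 0;
}
```

The selling point Intel is making is exactly this: the parallelism is expressed in ordinary x86 tools and threading models, so the porting cost is a recompile and tuning pass rather than a rewrite into a GPU-specific language.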

More information on the Knights Corner architecture can be found here. I think it will be interesting to see how widely Knights Corner is adopted for high performance workloads versus graphics cards from Nvidia and AMD, especially now that the industry has already begun adopting GPGPU solutions built on programming technologies like CUDA, and graphics cards are becoming more general purpose (or at least less specialized) in hardware design. Is Intel too late for the (supercomputing market adoption) party, or just in time? What do you think?