UPDATE (Nov 26th, 3:30pm ET): A few readers have mentioned that FPGAs take much less than hours to reprogram. I even received an email last night claiming that FPGAs can be reprogrammed in "well under a second." This differs from the sources I read when researching their OpenCL capabilities (for potential evolutions of projects) back in ~2013. That said, multiple sources, including one who claims personal experience with FPGAs, say the hours figure is not the case. Also, I've never used an FPGA myself; again, I was just researching them to see where some GPU-based projects could go.
Designing integrated circuits, as I've said a few times, is basically a game. You have a blank canvas that you can etch complexity into. The amount of “complexity” depends on your fabrication process, how big your chip is, the intended power, and so forth. Performance depends on how you use the complexity to compute actual tasks. If you know something special about your workload, you can optimize your circuit to do more with less. CPUs are designed to do basically anything, while GPUs assume similar tasks can be run together. If you will only ever run a single program, you can even bake some or all of its source code into hardware called an “application-specific integrated circuit” (ASIC), which is often used for video decoding, rasterizing geometry, and so forth.
This is an old Atom back when Intel was partnered with Altera for custom chips.
FPGAs are circuits that can be configured for a specific application, but can also be reprogrammed later. Changing tasks requires a significant amount of time (sometimes hours), but it is easier than reconfiguring an ASIC, which involves removing it from your system, throwing it in the trash, and fabricating a new one. FPGAs are not quite as efficient as a dedicated ASIC, but they're about as close as you can get without translating the actual source code directly into a circuit.
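To make the "reprogrammable circuit" idea concrete, here's a toy software model (my own sketch, not anything from Intel or Altera) of the basic FPGA building block: a k-input lookup table (LUT) whose 2^k configuration bits decide which boolean function it computes. "Reprogramming" is just rewriting those bits.

```python
# Toy model of an FPGA lookup table (LUT): a k-input LUT stores 2^k
# configuration bits, one output bit per possible input combination.
# Rewriting the bits "reprograms" the LUT into a different function.

def synthesize(func, k):
    """Slow step: evaluate the desired function over all 2^k input
    combinations to produce the configuration bits (the 'bitstream')."""
    return [func(*((i >> b) & 1 for b in range(k))) & 1 for i in range(2 ** k)]

class LUT:
    def __init__(self, k):
        self.k = k
        self.config = [0] * (2 ** k)  # blank canvas

    def load(self, bits):
        """Fast step: loading a precomputed bitstream is just a copy."""
        self.config = list(bits)

    def evaluate(self, *inputs):
        index = sum(bit << i for i, bit in enumerate(inputs))
        return self.config[index]

lut = LUT(2)
lut.load(synthesize(lambda a, b: a ^ b, 2))  # behave as XOR
print(lut.evaluate(1, 0))  # 1
lut.load(synthesize(lambda a, b: a & b, 2))  # reprogram as AND
print(lut.evaluate(1, 0))  # 0
```

A real FPGA is a grid of thousands of these LUTs plus configurable routing between them, but the same two-phase split applies: generating the configuration is expensive, writing it into the chip is not.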
Intel, after purchasing FPGA manufacturer Altera, will integrate its technology into Xeons in Q1 2016. This will be useful for offloading specific tasks that dominate a server's total workload. According to PC World, they will be integrated as a two-chip package, where both the CPU and FPGA can access the same cache. I'm not sure what form of heterogeneous memory architecture Intel is using, but this would be a great example of a part that could benefit from in-place acceleration. You could imagine a simple function being baked into the FPGA to, I don't know, process large videos in very specific ways without expensive copies.
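Here's a software analogy (only an analogy; the real CPU+FPGA sharing happens in hardware, not Python) for why a shared cache matters: a discrete accelerator must copy data to device memory and copy results back, while a cache-coherent accelerator can transform the buffer in place.

```python
# Copy-based offload vs. in-place offload, modeled with Python buffers.

def process_with_copies(frame: bytearray) -> bytearray:
    staged = bytes(frame)                          # copy #1: host -> device
    transformed = bytes(b ^ 0xFF for b in staged)  # the "accelerated" work
    return bytearray(transformed)                  # copy #2: device -> host

def process_in_place(frame: bytearray) -> bytearray:
    view = memoryview(frame)   # zero-copy view of the same underlying buffer
    for i in range(len(view)):
        view[i] ^= 0xFF        # mutate shared memory directly
    return frame               # same object; no copies were made

frame = bytearray(b"\x00\x0f\xf0")
assert process_in_place(frame) is frame  # no new buffer was allocated
```

For a large video frame, the two copies in the first version can easily cost more than the computation itself, which is exactly what a shared-cache package avoids.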
Again, this is not a consumer product, and may never be. Reprogramming an FPGA can take hours, and I can't think of too many situations where consumers would trade hours of time to switch between high-performance tasks. Then again, it takes just one person to think of a great application for it to take off.
“Reprogramming an FPGA can take hours”. Reprogramming modern FPGAs (even big ones) is usually done in under a minute. The part that takes hours is synthesizing and mapping HDL code (Verilog and VHDL) to a specific FPGA chip.
Oh? So could you pre-bake versions and blast them fairly quickly at, for instance, app startup?
Complex FPGAs have ‘code’ in external memory (say, over I2C) and are programmed at power-up. E.g., a Stratix V with a Cortex-M3 and other circuitry programmed that way takes circa 20 seconds. This is similar to downloading a Java app over the Internet: it takes days/months to develop the app but seconds to deploy.
So everybody is talking about FPGAs just because Intel is doing it, and FPGAs have been around for a while now! Intel is not the only company using FPGAs, or with plans to include FPGAs on server SKUs programmed to do specific server/other tasks! IBM will be doing so, ditto for AMD. FPGAs are still not as power-efficient as ASICs.
There is also eASIC, which appears to bridge some gaps between FPGAs and ASICs. It looks like Intel is interested in eASICs also.
“Is it an ASIC? Is it an FPGA? No, it’s eASIC!”
Article from eetimes:
Yes, what anonymous said is true. The logic synthesis phase takes hours, but dumping the code (generally called a bitstream) into the FPGA takes very little time. Traditionally the FPGA will load itself from a serial memory on the board, but many can be loaded over a parallel interface much more quickly, so, yes, you could ‘context switch’ the FPGA quickly.
You probably wouldn’t want to do so because the types of things that FPGAs are good at have very long pipelines, so the big cost would be the flush/fill of those pipelines.
So, yes to the pre-baking and yes to the blasting at app startup.
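The synthesize-slow / load-fast split described above can be sketched in Python (all the names here are made up for illustration; real toolchains and drivers look nothing like this):

```python
import time

BITSTREAM_CACHE = {}  # task name -> precompiled bitstream bytes

def synthesize_bitstream(task):
    """Stand-in for synthesis and place-and-route: the step that takes hours."""
    time.sleep(0.01)  # pretend this is the slow part
    return ("bitstream::" + task).encode()

def prebake(tasks):
    """Done once, offline: compile a bitstream per task and cache it."""
    for task in tasks:
        BITSTREAM_CACHE[task] = synthesize_bitstream(task)

class FakeFPGA:
    """Loading a cached bitstream is the fast 'context switch'."""
    def __init__(self):
        self.loaded = None

    def program(self, task):
        self.loaded = BITSTREAM_CACHE[task]  # just a memory write, no synthesis

prebake(["video_decode", "crypto"])
fpga = FakeFPGA()
fpga.program("video_decode")  # quick swap at app startup
fpga.program("crypto")        # and back again
```

The point of the model is only the asymmetry: you pay the compile cost once per task, ahead of time, and every later switch is a cheap table lookup plus a load.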
nice article man, love this stuff.
….that is a swastika. What the F…
That is a Nazi swastika…
is this even legal????? nazi wtf
I really hope that it’s just some massive mistake and a case of extremely bad taste on the part of Intel’s R&D design team, but we really can’t exclude the possibility that there are some radical Nazi-sympathizers working at that company also. This is very VERY bad. It obviously won’t stop me from buying and using Intel’s products in the future, but I’ll obviously be feeling extremely uncomfortable from now on when buying their products. That’s just naaaaazty.
And yet all I see is a Hindu or Buddhist symbol for auspiciousness. People need to seriously realise that the symbol pre-dates the Nazis by a long time and is still used outside of the Western world in its traditional sense.
Don’t pay attention to these fools; in general, Americans are the idiots who have certain preconceived views which they think are correct, and everyone who disagrees is an illiterate fool. The swastika is just one such example; they don’t care that it was a symbol whose meaning was distorted by the Nazis, and they can’t get past that.
Nice try, scrub.
1. I’m not a USA/Canada citizen, nor am I any “American”.
2. Read what I’ve said below. You’re making a fool out of yourself here, in all actuality.
I know perfectly well that the original swastika has nothing to do with Nazis. The problem is that a differently angled swastika means different things, and the Nazis angled it just the way it’s imprinted on that processor. Again, this all might be just one big mistake or a case of extremely poor taste on Intel’s designers’ side; until I get more info on this I’m not actually accusing Intel of being a Nazi-supporting company straight away. But this is quite nazty nonetheless.
He’s trolling, don’t take the bait
Please stop producing so much sheer autism, thank you.
I've seen some Godwin-ing in my day, but that is just impressive. Methinks they are looking too hard for something to be offended by.
What baffles me the most with this, is the fact that this SOMEHOW actually got final approval, was taped out, and was released. What the actual F, Intel. This smells extremely fishy.
Depending on how fast they can get the programming, it would be interesting to eventually have a growing list of applications where, when they are actively being used, the OS and drivers can automatically switch between a number of FPGA presets. For example, if an application is launched which will transcode their anime collection, the FPGA will automatically reprogram itself to accelerate the encoding.
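That driver policy could look something like this sketch (entirely hypothetical; the preset names and manager class are my own invention): when an app launches, load its FPGA preset if one exists, otherwise leave the FPGA alone and let the app run in plain software.

```python
# Hypothetical OS/driver policy for auto-switching FPGA presets per app.

PRESETS = {
    "transcoder": "h264_encode.bit",
    "archiver": "deflate.bit",
}

class FPGAManager:
    def __init__(self):
        self.active = None  # currently loaded preset, if any

    def on_app_launch(self, app):
        preset = PRESETS.get(app)
        if preset is not None and preset != self.active:
            self.active = preset  # fast, since the bitstream is prebuilt
        return self.active

mgr = FPGAManager()
mgr.on_app_launch("transcoder")  # switches to h264_encode.bit
mgr.on_app_launch("browser")     # no preset: the current one stays loaded
```

The interesting open question is the eviction policy: with one FPGA and many candidate apps, the driver would have to decide which preset earns the silicon, much like a cache.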
As their first set of code for the FPGA, Intel should release something that accelerates the performance of Notepad. It would also be cool to see something masterfully tuned and optimized to run the “Hello world” application (a version which can take advantage of 40+ threads in addition to the FPGA simultaneously).