There are a lot of colloquialisms tossed about, such as AI research and machine learning, which refer to the work of designing neural nets by feeding huge amounts of data into an architecture capable of forming and weighting connections, in an attempt to create a system that can process that input in a meaningful way.  You might be familiar with some of the more famous experiments, such as Google's Deep Dream and Wolfram's Language Image Identification Project.  As you might expect, this takes a huge amount of computational power, and NVIDIA has just announced the Tesla M40 accelerator card for training deep neural nets, alongside the lower-powered Tesla M4, which draws 50-75W and which NVIDIA claims can handle five times more simultaneous video streams than previous products.  Along with this comes Hyperscale Suite software, specifically designed to work with the new hardware, which Jen-Hsun Huang comments on over at The Inquirer.  
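To make the "forming and weighting connections" idea concrete, here is a minimal sketch of what training boils down to: a single artificial neuron nudging its weights to reduce prediction error on example data. This is purely illustrative (a toy single-neuron model learning the OR function with plain gradient descent, not anything specific to NVIDIA's hardware or software); real deep-learning workloads repeat updates like this across millions of weights, which is why GPU accelerators matter.

```python
import random

# Toy illustration: one neuron "learns" the OR function by repeatedly
# adjusting its weights against its prediction error (gradient descent).
random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection weights
b = 0.0   # bias term
lr = 0.1  # learning rate: how far each update moves the weights

for _ in range(1000):
    for (x1, x2), target in data:
        pred = w[0] * x1 + w[1] * x2 + b   # weighted sum of inputs
        err = pred - target
        # shift each weight a small step against the error
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# After training, rounded predictions match the OR truth table
print([round(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
```

Scaling this loop from two weights to billions, across thousands of simultaneous data streams, is the workload the new accelerator cards are built for.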

At the end of the presentation he also mentioned the tiny Jetson TX1 SoC.  It has a 256-core Maxwell GPU capable of 1 TFLOPS, a 64-bit ARM A57 CPU, and 4GB of memory, and communicates via Ethernet or Wi-Fi, all on a card 50x87mm (2×3.4") in size.  It will be available at $300 when it is released some time early next year.

"Machine learning is the grand computational challenge of our generation. We created the Tesla hyperscale accelerator line to give machine learning a 10X boost. The time and cost savings to data centres will be significant."

Here is some more Tech News from around the web:

Tech Talk