Some familiar scenery
This time we have the latest NUC on the test bench that features a faster Broadwell CPU and Iris graphics!
If you thought that Intel was going to slow down its iteration on SFF (small form factor) system design, you were sadly mistaken. It was February when Intel first sent us a NUC based on Broadwell, an iterative upgrade across a couple of generations of this very small 4" x 4" platform, one that proved to be interesting from a technology standpoint but didn't shift expectations for the puck-sized PC business.
Today we are looking at yet another NUC, also using a Broadwell processor, though this time the CPU runs quite a bit faster, pairs with Intel Iris 6100 graphics, and carries a noticeably higher TDP. The Core i7-5557U is still a dual-core / HyperThreaded processor, but it raises base and Turbo clocks by wide margins, offering as much as 35% better CPU performance and mainstream gaming gains in the same range. This doesn't mean the NUC5i7RYH will overtake your custom-built desktop, but it does make it a lot more palatable for everyday PC users.
Oh, and we have an NVMe PCI Express SSD inside this beast as well. (Waaaaaa??)
Even though we have seen more than our fair share of Intel NUC systems we still need to take a look around this device to see if anything has changed. Here are the quick specs of the DIY mini PC.
| Intel NUC5i7RYH Specifications | |
| --- | --- |
| Processor | Intel Core i7-5557U dual-core, HyperThreaded (3.1 GHz base, 3.4 GHz Turbo) |
| Motherboard | Custom |
| Memory | Dual-channel DDR3L SODIMM slots (empty) |
| Graphics | Intel Iris Graphics 6100 |
| Storage | Internal M.2 SSD (AHCI, NVMe); internal SATA 6.0 Gbps 2.5-in HDD/SSD (9.5mm) |
| Networking | Intel Pro Gigabit Ethernet; Intel 7265 802.11ac; Bluetooth 4.0; Intel Wireless Display support |
| Power Supply | 19V, 65 watt wall adapter |
| Connections | 4 x USB 3.0; 2 x internal USB 2.0 headers; 1 x Mini HDMI 1.4a; 1 x Mini DisplayPort 1.2 |
| Enclosure | 115mm x 111mm x 48.7mm |
For a couple of generations now the Intel NUC systems have come in two variants: a thinner and a thicker option, the latter of which can support a 2.5-in hard drive or solid state drive. The design is otherwise pretty much the same: a silver exterior with a black plastic top that can be removed and replaced. My one gripe about the lid is that it appears to be easily scratched, so avoid placing your phone, keys, or anything else on it.
On the front we find a pair of USB 3.0 ports; the yellow-colored one supports fast charging while the system is powered down. The 3.5mm jack on the right-hand side handles both headphone and microphone connections.
Rotating around to the back of the NUC, you find the remaining connectivity for this model: two display output options (mini HDMI and mini DisplayPort), two additional USB 3.0 ports, and a Gigabit Ethernet connection. The power input is on the left-hand side, and along the top is a vent for the small fan on the processor heatsink.
The NUC5i7RYH is a taller variant of the NUC chassis (seen on the left, NUC5i5RYK on the right) that allows for the installation of a 2.5-in hard drive or SSD. As of this writing there isn't any indication that a Core i7 version with the slimmer design is incoming, possibly due to the added TDP of the Core i7 processor.
Of course you can still remove the top plastic cover of the NUC and replace it with either a design of your own or one of the upcoming add-ons that Intel has been talking about since CES in January. There is both power and data connectivity through the top of the design and that should allow for devices like NFC readers, pico projectors or more unique technology options to expand on the design of this SFF system.
For a little flair, you can create your own 3D printed NUC lid, assuming you have access to such a device. Thanks goes out to Matt C. for sending along this custom built PC Perspective top for our NUCs!
A low-end Nvidia card, a 740 for example, and a 7800 APU would have been nice for a quick comparison in those charts. Skylake’s GT4e would go up too, as much as 50% (in favorable scenarios, I guess), so adding a GT card and an APU could show us how far behind Intel is today, if they are behind, and how much they would close that gap tomorrow with Skylake’s top GPU.
Intel’s Iris graphics already runs into the same bandwidth bottleneck AMD ran into with Kaveri. Adding more SPs is pointless until that’s resolved, and that’s partly why the Iris Pro SKU with 128MB of eDRAM exists: to partially alleviate this bottleneck.
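(Rough numbers, for anyone curious: dual-channel DDR3L-1866 works out to about 1866 MT/s × 8 bytes × 2 channels ≈ 29.9 GB/s, shared between the CPU and the GPU, while the Crystal Well eDRAM on the Iris Pro parts is quoted at roughly 50 GB/s in each direction on top of that, which is a big part of why the eDRAM parts keep scaling where plain Iris stalls.)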
Another missed opportunity for AMD. Now they would need to come up with an APU supporting GDDR (or even HBM) memory in order to rule the SFF market.
Missed opportunity? Not really. They offer good products for little boxes like the NUCs, but those products don’t come with “Intel Inside” logos; Intel’s brand is too strong. Intel is also very good at convincing others not to complicate their product catalogs with too much hardware from competitors.
As for GDDR and HBM, Carrizo does come with color compression like Tonga. It will be interesting, especially when we start seeing DX12 games.
hate it
anyone can buy a micro ATX FM2+ board with a Kaveri or the new APU line
way cheaper and better
good review guys (y)
784, uh yea, micro FM2 please; could have one of those built for under $200.00
Yes, the Fitlet PC people have some nice AMD-based products with some specialized expansion options. Hopefully AMD will get a Carrizo inside some mini boxes; I’d like to have one with a desktop SKU. I’m tired of laptop SKUs being shoved into these devices and then being called desktop systems. When Zen gets here AMD could clean up in this mini form factor market with its graphics, but even Carrizo’s cores are going to be helped out by AMD’s graphics/HSA and the DX12 and Vulkan graphics APIs. Intel’s graphics still isn’t there compared to AMD’s and Nvidia’s offerings.
Intel made mountains of those dual-core i7s, and now that the graphics APIs have caught up with multicore usage it’s going to be as many cores as possible for gaming. Even if the game itself is not using all of the cores all of the time, Windows is definitely going to need some of those cores for its bloat to run on without affecting overall system performance. The more cores/processor threads the better.
AMD needs to look at getting into the mini form factor market with a Zen-based APU as soon as possible; it’s the one area that still has growth potential among all the various PC markets out there.
You vastly overestimate game programmers’ abilities to move up to multi-core when they haven’t been able to make a multi-core physics engine for more than a decade despite the existence of OpenMP which makes it easy as pie to do. You have to remember that caliber of programmer is three or four tiers lower than the people who do parallel programming for a living.
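(For context, the commenter has a point about how low the bar is. Here is a minimal sketch of an OpenMP-parallel physics step; the `Particle` type and `step` function are hypothetical illustrations, not any real engine's code.)

```cpp
// Build with: g++ -O2 -fopenmp physics.cpp
#include <vector>

struct Particle { float x, y, z, vx, vy, vz; };

// Naive Euler integration; every particle updates independently,
// so a single pragma spreads the loop across all available cores.
void step(std::vector<Particle>& particles, float dt) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(particles.size()); ++i) {
        Particle& p = particles[i];
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}

int main() {
    std::vector<Particle> particles(1 << 20, Particle{0, 0, 0, 1, 1, 1});
    step(particles, 0.016f);  // one 60 fps frame's worth of time
    return 0;
}
```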
Games programmers are not systems programmers; it’s the job of the game engine’s systems programmers to abstract things, to make them easy for the games programmers/script kiddies. No one expects games programmers to be systems software engineers. The new graphics APIs and other functionality will be there, as will the SDKs and libraries provided by the SOC makers for their specific SOC or discrete GPU hardware. The game engine designers, alongside the hardware manufacturers, especially AMD, have been providing the necessary libraries for engine developers to take advantage of the hardware’s resources, so maybe some gaming companies are just lazy, or cash poor, or greedy. Parallel programming is most definitely not in the skill set of the average gaming “programmer,” but neither is OS development, and yet everybody utilizes the OS and its libraries with no expectation that the majority of the world’s programmers will ever understand the workings of even the most rudimentary OS.
I’ll tell you this: all this multiprocessing and parallel programming functionality should have been part of the OS since the first multicore microprocessors were introduced into the marketplace. All OSs should have been HSA aware a long time ago, able to utilize the GPU for physics calculations and other general purpose workloads. The OS makers have only themselves to blame, spending billions on junk UIs and useless app store runtimes that resell the same functionality as poorly done app code, to milk more profits from the computer’s users.
Why the hell did M$ not have any multi-adapter functionality in DX whatever, or in its OS, ever since discrete GPUs were introduced into the marketplace? And why did M$ not require it of all the hardware on any system “designed” to run Windows, from day one of the availability of APU/SOC systems that might also carry both a discrete GPU and an integrated GPU? It is the responsibility of the OS maker, even more so than the laptop/PC OEMs, to make sure that all of the computing hardware plugged into the computer IS available for computation all of the time, for both general purpose compute and graphics. What a rip-off switchable graphics has been, when the OS maker should have required that all GPU makers have their products working for their intended purposes, even alongside their competitors’ products, if the computer has GPUs from more than one maker plugged in at the same time. People should not be all “oh wow, M$ will now have a DX12 graphics API that allows multi-adapter”; it should be “WTF M$, why did your OS not do its job and have this feature years ago?”
You would never have found a mainframe OS that could not utilize its CPUs and vector processors (which modern GPUs were derived from) for all tasks, if it had a vector unit! That ability was in the mainframe OS, and there were libraries for using CPUs and vector processors for computation before a vector processor was ever adapted for graphics-only use. GPUs always had the ability to crunch numbers, so what gives the makers of the “modern” microprocessor-based OSs any excuse for not having HSA and GPGPU years ago!!!
There were HSA types of systems in the 1960s, so it must be simple greed on the part of the OS makers, the GPU makers, and the APU/SOC makers. But I blame M$ mostly, for never really making its OS and OS APIs HSA aware a long time ago. M$ was too busy fixing the cosmetics of its OS when it should have kept the basic interface that was perfected with Windows 7 and spent the rest of its resources on the under-the-hood parts that still need more work, especially in the areas of HSA and multi-adapter technology. What a waste of available computing capacity for users to have an integrated GPU that can’t be utilized for any purpose while a discrete GPU does graphics, or more than one discrete GPU that can’t be utilized together just because they came from different GPU manufacturers. Users should have had this “multi-adapter” OS ability the very first year after the discrete GPU was available on the marketplace. Think back to the first year discrete GPUs were available on PCI cards; that is when “multi-adapter” should have been available from M$!
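(As a point of reference for the multi-adapter complaint: under DX12 the enumeration side really is trivial; the hard part is scheduling work across heterogeneous GPUs. A minimal sketch, assuming the Windows 10 SDK, linking dxgi.lib and d3d12.lib:)

```cpp
// Enumerate every GPU the OS exposes and try to create a D3D12 device
// on each one -- the starting point for DX12's explicit multi-adapter
// model, where the integrated and discrete GPUs can be used together.
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Nothing stops an application from creating a device on every
        // adapter that succeeds here and dispatching work to all of them.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"Usable D3D12 adapter %u: %ls\n", i, desc.Description);
        }
    }
    return 0;
}
```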
Oh please, at some point even the best programmers must admit abstraction begins to eat away at performance and flexibility. We ran into this problem with DX11, and we’ll run into it again later down the line. The engine designers have absolutely no duty to make the usage abstract and easy. They have a duty to make something truly powerful and flexible. If game programmers can’t keep up, it’s time to fire the old guard.
The majority of the world’s programmers age 25 and up are very familiar with the concepts of operating systems, even if not the intimate details of a particular one, and those guiding concepts shape most software development. Game programming has reached a crossroads. Either the programmers have to evolve and grow, or the industry will stagnate and die. The tools are readily available to help them grow and evolve. It’s up to the programmers to use them.
You greatly overestimate the ability of systems to take advantage of such new infrastructure, especially in the days of far smaller RAM and storage media with much slower access times. You have to craft software for the platforms you’re targeting, and from a business perspective you always aim for the lowest common denominator. On Windows XP that went all the way back to the first Pentium and i386 chips. With Windows 7 that still went back to the P4/PD era. With Windows 8.1 the lowest common denominator is Core 2-class processors with SSE instructions.
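(For what it’s worth, the usual way to honor that lowest common denominator without giving up newer instructions is runtime dispatch: ship a baseline path for the oldest supported CPU and detect better instructions at startup. A minimal sketch assuming GCC or Clang; the function names are hypothetical:)

```cpp
#include <cstddef>

// Baseline path: plain scalar code any supported x86 CPU can run.
void add_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

#if defined(__GNUC__)
// Same loop, but the compiler may auto-vectorize it with AVX enabled
// for just this one function.
__attribute__((target("avx")))
void add_avx(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) out[i] = a[i] + b[i];
}
#endif

// Dispatch once per call based on what the host CPU actually supports.
void add(const float* a, const float* b, float* out, std::size_t n) {
#if defined(__GNUC__)
    if (__builtin_cpu_supports("avx")) { add_avx(a, b, out, n); return; }
#endif
    add_scalar(a, b, out, n);  // lowest-common-denominator path
}

int main() {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];
    add(a, b, out, 8);
    return 0;
}
```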
You also have to realize the sheer difficulty of designing an API and suitable compiler extensions for such low level access as HSA in a way that is extensible, powerful, and human-readable, meanwhile also having to develop the machine-code level protocols to pull it off, much less pull it off well.
Again, that’s an arduous task which took Microsoft all five years between the DX11 release and DX12 to get right, and we all know there will be bugs in the initial releases. If systems programming were easy, this all would have been done ages ago. You have sorely unreasonable expectations. Even the world’s foremost experts only produce 16 lines of code a day that pass all tests and can be committed to a final project. It takes months if not years of planning, and certainly months and months if not years of programming, testing, deploying, gathering data, tweaking, etc.
Oh yes you could. Hell, most IBM systems standing today are not as robust as you assume.
I think Intel should release one with desktop CPUs, not notebook ones. Don’t question whether it’s possible, because Fujitsu did it. It is dual core, and dual core is the best CPU in the world. I ask Intel to release 95 W and 165 W dual-core CPUs, the second with 2048-bit vector (AVX) units.
On the graphics side, a GPU chip is cheap, and non-Intel parts can provide it; Intel can stick to producing real CPUs.
Thanks!
Just created an account on this site. It would be nice if the PCPer LIVE timings reflected my timezone as well.
Lots of buyers will use it as an HTPC running XBMC/Kodi or whatever. Do you have more details about the Iris 6100 graphics? Does it support HDMI 2.0? HDCP 2.2? Can it decode HEVC? …
Yes, if you can find a box with a proper port; I don’t know; and HEVC is decoded in a hybrid fashion, with some fixed-function units and some brute-force GPU processing. Unfortunately, HEVC was finalized so late into the development process, and Intel’s graphics team is so small compared to AMD’s/Nvidia’s, that a full-blown transcoder did not make it into the Broadwell generation of Intel’s graphics architecture.
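(If you want to check what a given driver advertises yourself, one hedged approach on Windows is to enumerate the D3D11 video decode profiles; hybrid and fully fixed-function decoders both surface the same way here, so this shows presence, not speed. Sketch assumes a recent Windows 10 SDK, linking d3d11.lib:)

```cpp
// Ask the D3D11 driver whether it exposes an HEVC Main decode profile.
#include <initguid.h>   // materialize the decoder-profile GUIDs below
#include <d3d11.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                 0, nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, nullptr)))
        return 1;

    ComPtr<ID3D11VideoDevice> video;
    if (FAILED(device.As(&video))) return 1;  // no video support at all

    // Walk the decode profiles the driver advertises, looking for HEVC Main.
    UINT count = video->GetVideoDecoderProfileCount();
    for (UINT i = 0; i < count; ++i) {
        GUID profile;
        if (SUCCEEDED(video->GetVideoDecoderProfile(i, &profile)) &&
            IsEqualGUID(profile, D3D11_DECODER_PROFILE_HEVC_VLD_MAIN)) {
            printf("Driver advertises an HEVC Main decode profile.\n");
            return 0;
        }
    }
    printf("No HEVC decode profile exposed by this driver.\n");
    return 0;
}
```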
I have a noob (or even ignorant) question: will it be good for 4K content playback, or is it overkill?
It can support 4K resolutions at 24Hz; think movies, not gaming.
Much as I would love to own one of these devices for the living room, it ain’t going to happen. Intel is treating these devices like a mobile laptop, just without a screen. To get the device up and running it would definitely run you $800 or more: the M.2 storage on Amazon is roughly $70, an 8GBx4 LPDDR3 kit roughly the same $70, and then there’s the operating system, depending on which version you favor. Sure, the NUC is kind of ripping off Gigabyte’s NUCs; I could do the very same with an ITX form factor and have the same flexibility plus more.
The Intel Compute Stick, on the other hand, seems really interesting when you weigh the fact that you won’t be doing much with it besides normal web surfing, watching YouTube videos, listening to music, or streaming Netflix movies. The downside of the Compute Stick, in some reviews I read, is that they’re reporting really bad WiFi connections, and I personally don’t have time to play around with my network just to get a decent connection.
As of now there won’t be any SFF devices happening in this household this year, not until the prices of these devices come down.
PhoneyVirus
How much noise did the fan in the i7 NUC make? (Compared to an i5 NUC, for example.) Thanks!
I noticed the slot for a drive is 9.5mm. I recently bought a 2TB 2.5″ HDD, but it’s 15mm high. Just a word of caution when considering 2.5″ drives.
Does anyone know if the DisplayPort or the HDMI can do 4K at 60 Hz? I’m considering a large 4K display and would prefer DisplayPort @ 60 Hz, and maybe I would also hook up a non-4K HDMI display.
Thoughts?
The technical reference for the NUC5i7RYB states that the maximum supported resolution is 3840 x 2160 @ 60 Hz, 30bpp. The Mini DisplayPort is compliant with the DisplayPort 1.2 specification.
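(Quick back-of-envelope check, ignoring blanking overhead: 3840 × 2160 pixels × 60 Hz × 30 bpp ≈ 14.9 Gbps of raw pixel data, which fits within DisplayPort 1.2’s 17.28 Gbps of usable HBR2 bandwidth. HDMI 1.4a’s roughly 8.16 Gbps ceiling is why its 4K output is limited to 24/30 Hz.)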
I got myself one of those Friday, and am equipping it with an M.2 2280 SSD and 2x8 GB DDR3L 1600 RAM modules. It will power a 4K 55″ touch monitor for demo purposes, and I hope it will run as smoothly as expected.
I just got my NUC5i7RYH and SM951 NVMe 256GB delivered and I’m having an issue getting the NUC to recognize the SM951 as a bootable drive under UEFI boot. Does anyone know how to do this?!?
Any hints are much appreciated as I’m still clueless after a week of searching for an answer.
Btw – I want to install Win7x64PRO.
Can we get Matt C’s information? We like the lid and are looking at having a few made for our company.
Thanks!