Intro, Goals, and Hardware
Threadripper’s 32 threads are perfect for our new high-end Plex server.
Regular PC Perspective readers probably know that we're big fans of Plex, the popular media management and streaming service. While just about everyone on staff has their own personal Plex server at home, we decided late last year to build a shared server here at the office, both for our own day-to-day use as well as to serve as the backbone of our recent cord cutting experiment.
You can run a Plex server on a range of devices: from off-the-shelf PCs to NAS devices to the NVIDIA SHIELD TV. But with many potential users both local and remote, our Plex server couldn't be a slouch. So, like the sane and reasonable folks we are, we decided to go all out and build a monster Plex server on AMD's Ryzen Threadripper platform. With up to 16 cores and 32 threads, a Threadripper processor would give us all of the transcoding horsepower we'd need.
It's now been several months since our Plex server was brought online, and so we wanted to share with you our build, along with some discussion on why we chose certain hardware and software.
Goals
First, let's take a brief look at our goals and requirements for our Plex server. Since we were building an overpowered system, we decided to migrate our local PCPer file server to the new hardware as well.
This required reserving about 15TB of storage and it also determined our choice of operating system. If we were building a server exclusively for Plex, an operating system such as FreeNAS or unRAID would have been ideal. But with the server needing to pull double-duty for office work, we decided to go with Windows 10 Pro.
The Plex Media Server application runs great in a Windows client environment, and sticking with Windows let us maintain compatibility with our existing backup and testing workflows. It's possible of course to use one of the aforementioned storage-focused operating systems and virtualize Windows as necessary, but we wanted to both keep things simple with a single OS as well as have access to native performance when we needed it.
We also briefly looked at running Windows Server for its improved management features and its ability to better handle and defer those unfortunately timed Windows Updates, but we determined that Windows Server's improved capabilities didn't justify its higher cost, at least when it came to our needs.
System Hardware
As mentioned, we went with Threadripper for this build, specifically the top-end 1950X. AMD's Ryzen lineup may not be the best choice for single-threaded workloads, but for tasks such as transcoding multiple video streams simultaneously, the 1950X's 32 threads are hard to beat.
At the time we were planning this build, there weren't many Threadripper-compatible motherboards available. We therefore went with what we could find, which was the Gigabyte X399 AORUS Gaming 7. It's a fine board that has thus far served us well, but, as its name states, it's a gaming-focused board with more features than we need. A build like this doesn't demand a feature-rich motherboard, so if we were building it today, we'd likely go with a cheaper option.
Similar to the motherboard situation, there were not many CPU coolers available at first either. Thankfully, one of the few that was available was the Noctua NH-U9 TR4-SP3, a 92mm dual-fan cooling solution that has kept our 1950X, at stock frequencies at least, humming right along.
RAM is important with the Threadripper platform, and while we didn't go with the fastest memory on the market, we ended up with 32GB (4 x 8GB) of Corsair Vengeance LPX at 3200MHz. As we'll see later on in the "Performance" section, the 1950X performs just fine at stock processor and memory speeds.
We also needed a video card to connect to our server room monitor. Although Plex now supports hardware-accelerated transcoding, we didn't need to worry about that as our 1950X would be more than capable of handling our video encoding needs. We therefore went with an NVIDIA GTX 750 Ti that we had on hand here at the office. We chose this card not only because we had one available, but also because it does not require external power, helping us keep cables at a minimum for a cleaner layout and improved airflow.
To take advantage of our office's recent upgrade to 10Gbps Ethernet, we equipped the server with an Intel X540-T1 NIC. This is an older model that Intel has since replaced, but it still performs great and can be found used (or in some cases even new) for between $120 and $150. There are cheaper 10GbE NICs out there, but these Intel cards have been rock-solid in both our server and in many of our workstations. A 10GbE upgrade isn't necessary in most cases for just Plex, but since this server will also handle our PCPer data, including large 4K video projects, we wanted to ensure the best performance possible.
We'll dive into the storage aspects of our server next, but to round out our build hardware, we used a spare 120GB Samsung 840 SSD for our boot drive. We're storing our Plex database on another dedicated drive (a 1TB Samsung 960 Pro), so we didn't need a large or especially fast drive just to run Windows.
Obviously you don't want to divulge too much information about your internal workings, but based on the story it sounds like the server is used for file storage and Plex. I don't understand why FreeNAS or something similar wasn't used, given the description in the story.
But FreeNAS also doesn't have great support for AMD Ryzen/Threadripper at this time, to my knowledge.
Honestly, I spent several weeks trying a lot of operating systems and solutions including FreeNAS and some other Debian-based server environments (OpenMediaVault). The answer is we live in a Windows environment here, and all things said and done, Windows to Windows filesharing was the easiest and most compatible.
I'm sure we could have configured SMB on FreeNAS to work perfectly, but at some point we had to go ahead and implement the server instead of spending more time on it.
I was very impressed with the time I spent with FreeNAS and OpenMediaVault (shout out if you've never taken a look at it, it's really neat!). However, when the time came, Windows was the best solution for us at the time.
UnRAID would have been perfect for you guys. You would have been able to use the built-in Docker support for Plex and the built-in virtualization system for running Windows. Using the features available on Intel and AMD systems, you could have assigned 4 (or more) cores just to Windows and the rest to everything else.
No, they said they wanted native performance at certain times, meaning not only bare-metal performance but also access to all cores. It's in the article.
Also in the article is a comment that they don’t have time to deal with it now, but know they will need to eventually.
The Plex jail for FreeNAS is community-maintained and runs far behind the Windows build in terms of features. I ran it for a while and finally abandoned it for the Windows platform. Sad, but if Plex is your killer app on your storage system, that's the way it goes.
You have to update Plex in the jail yourself. There's an old forum post out there on either the Plex or FreeNAS forum that will walk you through it. It's a pain, but takes 1 minute.
Use raw FreeBSD instead and make a full-blown FreeNAS or TrueNAS. I've built several boxes over the past 6 months. They all perform on par.
FYI:
http://www.icydock.com/goods.php?id=255
+
http://highpoint-tech.com/USA_new/series-ssd7120-overview.htm
+
https://www.newegg.com/Product/Product.aspx?Item=N82E16817801139
(Allyn has one of the latter)
Nice to see Threadripper being able to perform well as a server platform.
Was there any thought of doing an Epyc 7401P based system? Or did it more matter that you likely had most of the parts for Threadripper available from previous reviews and such?
Epyc is the better platform for workstation/server usage, with 128 PCIe lanes and 8 memory channels per socket. So a dual-socket Epyc SP3 motherboard with 16 memory channels (8 per socket) and two EPYC 7251 8-core CPUs at around $500 each may be great if total memory bandwidth is needed for decoding workloads.
Threadripper motherboards are not certified/vetted for ECC memory like the Epyc boards are. The more memory slots, the more population options the user has, making it possible to use many lower-cost, low-capacity DIMMs instead of fewer high-cost, high-capacity DIMMs. A dual-socket Epyc board with 16 memory channels is currently less costly than some of the few single-socket Epyc/SP3 options, unless the user wants lots of PCIe x16 slots like the Gigabyte single-socket Epyc/SP3 board offers for around $610-$650.
The Epyc motherboards/CPU SKUs also get 3+ year warranties and other features not available on consumer boards.
This is getting old.
It never gets old that consumer crap is touted as workstation/server grade and the gaming morons eat it up. AMD's Epyc SKUs are a better feature-for-feature deal than any consumer gaming tat! This is not Intel's overpriced, Meltdown-affected offerings, as Epyc is plenty affordable relative to any of AMD's consumer-branded stuff.
No one gives a rat's A$$ about overclocking and gaming where workstations/servers are concerned. Epyc represents a better value than any consumer Threadripper CPUs/MBs, and the little gaming gits' concerns will matter less and less for both AMD and Nvidia as far as GPUs are concerned.
Take your Threadripper and its "ECC" compatibility and limited MB features compared to Epyc/SP3 and get the fudge out of here. You cannot fool anyone with that consumer "workstation" nonsense anymore.
Stupid gamers, how's that GPU availability going for your little gaming usage now that GPUs have more uses than only gaming? Epyc kicks Threadripper's A$$ for workstation/server value any time and any place.
Vega is not here even now for gaming in any numbers, but Vega 10 is sure inside those Radeon Pro WX 9100s and Radeon Instinct MI25s, along with the real workstation Epyc/SP3 MB SKUs with real grown-up ECC certification/vetting, real warranties, and long-term firmware/driver support.
Availability of components was a major factor in the decision making. We started this build months ago and barely had any choice in motherboard for Threadripper, let alone Epyc. For a true enterprise-class situation, Epyc would have been worth waiting for. In our case, while we have more going on than most home/smb setups, we didn't necessarily need that level of performance or enterprise/server feature set.
Although man, that 7401P offers a heck of a lot of performance for its price point, at least for heavily multithreaded workloads.
Is there something inherently "wrong" with Windows Storage Spaces? I'm running it now on my HTPC with about 6TB of media. I don't have any feelings for or against it, but would love to know if I should migrate off it in my next build.
Technically, Win 10 Storage Spaces works well; I built a storage/Plex server on it a couple years ago. What convinced me to move to something else was Microsoft's increasingly aggressive update strategy, which was frequently rebooting my server when I didn't want it to reboot.
Gives me an idea of what to do with my 1950x threadripper system when it becomes obsolete in 10 years 🙂
https://pcpartpicker.com/list/C6Tryf
I had the same idea, but for my 5960X system, and probably in much less time.
Isn't Plex working on a serverless system already, so that dedicated Plex servers become unnecessary someday soon? I hope so.
Well, the Plex Server application has long been available on some NAS devices, as well as the NVIDIA SHIELD. For most users, either of those options are likely just fine. The primary factor is the device's ability to transcode media when a Plex client isn't able to directly play it. In the case of the SHIELD and high-end NAS devices, they can handle one or two simultaneous transcoding sessions without issue.
On the lower-end NAS devices, or in the case of high resolution HEVC source files, you may not be able to transcode to Plex clients. In that case, you could "optimize" your media files, either manually with Handbrake, FFmpeg, etc. or by using Plex's built-in optimize feature. What this does is create a version of your file that can be directly played on your Plex clients (iPad, Fire TV, etc.), allowing your NAS device to just stream the original file without needing to transcode it.
The downsides of this approach are that it takes up more storage space for the new "optimized" file (unless you choose to delete the original file after optimizing) and the time it takes to re-encode your media library into a direct-playable format, which could be days or weeks depending on the number of files and speed of your computer or NAS processor.
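As a concrete illustration of the manual route, an FFmpeg invocation for this kind of "optimize" pass might be built like the sketch below. The file names, CRF value, and audio bitrate are illustrative assumptions, not Plex's actual optimizer settings:

```python
import shlex

# Hypothetical sketch: build an ffmpeg command that re-encodes an HEVC
# source to H.264/AAC in MP4, a combination most Plex clients
# (iPad, Fire TV, etc.) can direct-play without server-side transcoding.
def optimize_command(src: str, dst: str, crf: int = 20, audio_kbps: int = 192) -> list:
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-preset", "medium", "-crf", str(crf),
        "-c:a", "aac", "-b:a", f"{audio_kbps}k",
        "-movflags", "+faststart",  # move the moov atom up front for streaming
        dst,
    ]

print(shlex.join(optimize_command("movie.hevc.mkv", "movie.optimized.mp4")))
```

Lower CRF values mean higher quality and larger files; tune to taste before batch-converting a whole library.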
So, in our case — and likely for the foreseeable future — there's definitely still a good use case for dedicated Plex servers. But for many users, the ability to use a single small NAS or media device as a Plex server is already available.
Why Windows for a server? I know it's harder to go with other options, but Linux with ZFS (Ubuntu, for example) works great with Plex, and Samba is trivial to set up. You can even easily separate your workloads with KVM. I work in enterprise and we avoid Windows like the plague; it's too unreliable, hard to maintain, and insecure.
I just feel that, given the technical depth your team is renowned for, you would absolutely have been able to get better results by avoiding Windows.
Windows is good, just not the client version. Get a Server version and you can easily tell it when to install updates and make sure it doesn't reboot unless you explicitly allow it. It does cost a bit more than a client license, but this is something that is supposed to be reliable and last a long time. You can always reuse it on the next server thanks to MS' very long support.
The headline on the front page reads “32 cores”…
Thanks for catching that — fixed!
A nice system, very effective no matter what others say about consumer grade stuff in an enterprise work environment.
My home NAS is not powerful enough to run as a Plex server (it is old); an upgrade would be good, but I think your system might be overkill for me!
Now I will go back to the workarounds for dealing with W10 breaking networking with the NAS (sorry, it's an upgraded security feature!). The NAS does not show up under Network in File Explorer; not a big issue, but still a pain.
One suggestion: You’re not mounting the hardware in the rack correctly. Rack mount cases are designed to line up with the “U” markers on the rack. At no point should a case be mounted where it is straddling part of the next section.
In your picture above, you have a 4U case that is literally taking 5U of space because it wasn’t properly mounted. Also, the mount points are off because the mounting holes are not uniform on the rack. This is why you can’t use 4 screws to attach this.
To remedy this, unmount the case, line up the clips on the rack for the screws to match where they are on the ears for the case (or line up the rails…tough to tell from the picture), and have the case go from the bottom of 20 to the top of 23 OR the bottom of 21 to the top of 24.
Nice article. Poor finish!
Indeed. In fact, it's not really mounted at all in the article's picture, just sitting on some makeshift rails while we move stuff around. As Ryan mentioned while discussing this article during the most recent podcast, our "server room" is kind of a mess right now, so everything is just thrown together for the time being. I'll be sure to reference your comment when we go to permanently mount the server and our other equipment. Thanks!
Have a look at sichbopvr. I'm slowly migrating away from Kodi PVR and my Media Center 10 hack (the forced 1709 update broke DVR in MCE).
Does Plex allow you to view your media remotely, or are you using a VPN?
I would not use Windows 10 for any media server, as Windows 10 may just decide to remove some of your saved content if it decides that content violates some DRM (digital rights management). So whatever Linux/open-source option without all that spying baked in is probably going to be the best solution.
Wait until 2020 to see how Windows 10 will truly be, with even more forcing and even less control over your hardware!
Yes, one of the key features of Plex is that it allows remote access for both the primary account and any "friend" accounts with whom your media is shared. I believe that currently the only limitation is Live TV. Specifically, if you have Live TV set up and enabled, the primary account can view that stream remotely, but not any shared accounts. However, shared users have full access to live TV programs that have been recorded via Plex DVR.
How did you guys fix the forced Win 10 updates and reboots? Sorry if this has been asked already (if so, just say so and I'll look through all the comments to find it); I did scroll down some but did not see it yet! Since this was built, have you done any hardware updates and run into any problems?
We haven't. There are various unofficial methods to prevent automatic updates in Windows 10 (and some official methods to at least temporarily delay the updates), but we haven't yet implemented those. Part of the reason is we know we need to move to another platform eventually. The other reason is that our workloads don't require 100% uptime, so we've grown accustomed to performing regular updates/maintenance/reboots during downtimes to minimize the chance of things going down during a critical moment.
We haven't made any hardware changes to the server since it was finalized, although we've been playing around with other equipment in an attempt to optimize our 10Gb network. Those changes haven't affected the server. However, as mentioned in the article, the Windows 10 Fall Creators Update caused a number of software issues that took some time to address, including issues with VMware Workstation, our backup software, and some network sharing permissions (the weirdest of which involved the FCU changing our network from Private to Public, which then locked all of our shared apps and services behind the Windows firewall).
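For anyone who hits the same Private-to-Public flip after a feature update, the profile can be switched back with PowerShell's NetConnectionProfile cmdlets. A hedged sketch; the interface index will vary per machine:

```powershell
# List connections and note the InterfaceIndex of the affected one
Get-NetConnectionProfile

# Set that connection back to Private so sharing-related firewall rules apply
Set-NetConnectionProfile -InterfaceIndex 12 -NetworkCategory Private
```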
Why not just disable Windows updates? I haven’t updated since some time last year
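For what it's worth, on Windows 10 Pro and Enterprise the documented Group Policy registry value for disabling automatic updates looks roughly like this fragment (a sketch; Home editions may ignore these policy keys, and skipping security updates carries its own risk):

```reg
Windows Registry Editor Version 5.00

; Stop Windows Update from downloading and installing automatically
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"NoAutoUpdate"=dword:00000001
```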
Let me know how that 16-drive array works out when the first drive fails and you have to rebuild with only two drives of redundancy.
Fair. But that's why we have multiple backups of this data.
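As a back-of-the-envelope illustration of what double parity buys you, here's a small sketch. The drive count matches the comment; the per-drive size is an assumption for illustration, not the article's actual configuration:

```python
# Rough numbers for a double-parity (RAID-6-style) array: usable space is
# (drives - parity) * size, and the array survives `parity` concurrent
# drive failures. During a rebuild, only (parity - 1) more failures are
# tolerated, which is why backups still matter.
def array_stats(drives: int, tb_per_drive: float, parity: int = 2) -> dict:
    return {
        "usable_tb": (drives - parity) * tb_per_drive,
        "failures_survived": parity,
    }

stats = array_stats(drives=16, tb_per_drive=2.0)
print(stats)  # 16 x 2TB with 2-drive parity -> 28.0TB usable, survives 2 failures
```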
Also was esxi not an option?
Honestly, ESXi is where I think we'll ultimately end up.
How many transcoded streams are you actually seeing versus the educated guess?
Check out Byte My Bits on YouTube for some good Plex info.
Maybe you can reach out to Gigabyte and ask them why there's still no sign of any BIOS update with AGESA 1.0.0.4 or newer after months now. Just silence…
I am running a Dell R610 (software testing) and a Dell R620 (VMware ESX 6 Standard), and both can be nearly silent even in a 1U form factor with dual PSUs and between 12 and 20 cores plus HT.
Both use around 100-110 watts at idle from the wall (measuring both PSUs) when configured with mid-range 95W TDP CPUs.
The PSUs are cheap on the open market, can be found new, are easy to find, and generally don't fail, as they are usually Delta-built units.
~~~~~~~~~~~~~~~~
Best idea for the speed you want, with some more versatility:
Set up your home-brew storage server as an iSCSI target on whatever OS you want, with a 4- or 8-core CPU and lots of fast RAM set up as cache.
Use the SSDs for an additional cache tier, and boot from USB drives.
Use 10+GbE or FC NICs (iSCSI offloading support is better with FC, which also has lower latency) in the NAS box and the same NIC in the R610/R620, run VMware ESX on the R610/R620, and host whatever you want. Run smaller SSDs locally for the high-IOPS VMs, set up an iSCSI datastore for the lower-IOPS VMs, and boot from the internal SD card or a USB thumb drive (get the SD cards from Dell for best reliability).
Pass the FC cards and/or 10GbE NICs through to the guest VM to lower latency.
Or get similarly configured 1U HP servers.
Then you can use direct-attach cables with the FC NICs if they are SFP or SFP+, so you don't need fiber and transceivers unless you also want an FC switch and multiple servers connecting to the same NAS/storage array.
Not that much more costly, but potentially far more versatile.
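As a rough illustration of the iSCSI-target step described above, exposing a block device over iSCSI with Linux's LIO stack via targetcli might look like the following. The device path and IQNs here are made up for this sketch:

```
# inside the targetcli shell (run as root)
/backstores/block create name=nas_store dev=/dev/md0
/iscsi create iqn.2018-04.local.nas:storage
/iscsi/iqn.2018-04.local.nas:storage/tpg1/luns create /backstores/block/nas_store
/iscsi/iqn.2018-04.local.nas:storage/tpg1/acls create iqn.1998-01.com.vmware:esxi-host
```

The ESXi host's software iSCSI initiator would then point at the target's portal IP and discover the LUN as a datastore candidate.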
How much energy does your setup use? I’m currently using an old iMac with two external drives attached. I do like that the iMac doesn’t gobble up too much power. Would be curious how your system compares.
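For anyone wanting to compare running costs themselves, a quick sketch like this works. All wattages and the electricity price below are assumed figures, not measurements from the article:

```python
# Convert an average draw in watts to an approximate yearly electricity cost.
def yearly_cost(watts: float, price_per_kwh: float = 0.12) -> float:
    kwh_per_year = watts * 24 * 365 / 1000  # hours per year / watts-to-kW
    return kwh_per_year * price_per_kwh

# Hypothetical comparison: an older iMac plus external drives vs. a
# Threadripper server idling higher under light load.
for name, watts in [("old iMac + 2 drives", 80), ("Threadripper server", 150)]:
    print(f"{name}: ~${yearly_cost(watts):.2f}/year at $0.12/kWh")
```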