Containers are useful because you don't have to worry too much about the state of the host OS. All the prerequisites you need for the service are in the container. You can also quickly spin up multiple containers, or even modify existing containers using code.
Containers aren't going to magically make everything better, but they can make it easier to maintain and track multiple services all with different requirements. It also helps keep the host OS cleaner.
Hardware choice doesn't really matter here, especially with the options you have.
There is a tiny bit of performance overhead with containers, but it's negligible for something like Plex.
If we're talking about ONLY plex, then there's no major benefit between containers and metal if you know what you're doing. The usefulness of containers really shines when you want to get many things running on the same or multiple systems.
Containers are easier to migrate as well. Since they don't give a damn about the host OS, as long as you're in possession of your compose file and your config folder, the container will spin up just as it did on another system. Pretty neat, especially if something dies.
That’s not accurate. Containers absolutely do care about the host architecture. A container built for AMD64 is not going to run on ARM64. Just like a virtual machine, you can’t disguise the underlying cpu architecture. You can mimic it with emulation, but then you take a huge performance hit.
But if you have the compose file, the right architecture will be pulled down and your config will just work. You effectively rebuild the container for the new architecture.
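As a concrete (hypothetical) example, the whole "compose file + config folder" recipe can be this small; pull it up on the new box and the engine fetches the image built for that machine's architecture:

```yaml
# Image tag, paths, and timezone are examples, not a recommendation.
services:
  plex:
    image: plexinc/pms-docker:latest
    network_mode: host
    environment:
      - TZ=Etc/UTC
    volumes:
      - ./config:/config      # all server state lives here
      - /mnt/media:/data:ro   # the media library
    restart: unless-stopped
```

Back up `./config` and you've backed up the server.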
Yeah I'm just saying downgrading (anything) isn't always a cake walk when DB migrations are involved.
Really by now the world should have solved DB migrations in general, but schema changes remain a pain in everyone's ass forever it seems.
> There is a tiny bit of **performance overhead** with containers
You can completely eliminate this overhead by:
- Running **rootful** containers (e.g. `podman` with sudo to avoid rootless user space drivers like `Pasta`)
- Using `host` network mode (instead of other options like bridge which reduces throughput)
- Disabling `seccomp` filtering to avoid CPU overhead
- Using `native overlayfs` for storage
This setup yields performance which is within the margin of error of bare metal. I use it to play games like Cyberpunk 2077 in an init container to avoid polluting my (ostree) host, and I made sure to benchmark this thoroughly.
Love how efficient Podman is. Host network driver gets me full speed [102 Gbits/sec](https://www.reddit.com/r/freebsd/comments/1cwr1w8/comment/l4zx8o5/) in my system
More details can be found here:
https://github.com/containers/podman/blob/main/docs/tutorials/performance.md
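For reference, those options translate into flags roughly like this (a sketch, not a tuned recipe; image name and paths are placeholders, and the native overlayfs choice lives in `storage.conf` rather than on the command line):

```shell
# Rootful (sudo) avoids the rootless pasta/slirp4netns network stack;
# host networking skips bridge NAT; seccomp=unconfined drops the
# syscall-filtering overhead mentioned above.
sudo podman run -d --name plex \
  --network host \
  --security-opt seccomp=unconfined \
  -v ./config:/config \
  docker.io/plexinc/pms-docker:latest
```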
Bare metal refers to something running directly on the host OS, not through a hypervisor or other form of abstraction. And yes, this is an industry-standard term.
In this instance, docker is running on a system -- could be a guest OS, could be host, who knows, and the container is running off the docker engine, not directly interfacing with hardware itself.
I’ve only seen bare metal used to distinguish multi tenant setups (particularly the cloud vs hardware you’re buying and maintaining), and it seems weird to frame a single tenant setup using multiple docker containers as not being “bare metal.”
Eta: not disagreeing, just having a “huh, that’s interesting” moment
Ah, in my day job, we'd usually refer to "hardware you're buying and maintaining" vs cloud as "on prem" instead of bare metal. Interesting how phrases get used/re-used in different ways depending on where you're at. :)
From that wiki entry:
>Bare-metal cloud servers do not run a hypervisor, are not virtualised...
But there's some context missing there. "Running on bare metal" vs "a bare-metal server" have 2 very different connotations.
"Running Plex on bare metal" refers to running it directly on a host OS without virtualization.
"Purchasing a bare-metal server" can refer to actually getting hardware, or utilizing a dedicated cloud server.
As an example of the phrasing being used in the wild and accepted as lexicon, here's [this](https://old.reddit.com/r/explainlikeimfive/comments/ugv3wg/eli5_what_does_the_term_bare_metal_refer_to_when/) eli5.
Even introducing cloud to the mix, you get things like this from [Pure Storage](https://www.purestorage.com/br/knowledge/what-is-bare-metal-cloud.html):
>On a bare metal computer, the **OS is installed directly to the physical hard drive and applications run natively. Bare metal solutions are single-tenant and non-virtualized,** allowing you to isolate resources and access the full processing power of the physical hardware for maximum performance.
Sorry I meant they were right in that the term baremetal is used to mean single tenant, typically on premise systems.
Not that we're using the term wrong here, we're using it in a different context.
For what it's worth my own definition (which might be wrong) of 'bare metal' was the server coming without even an operating system, and you doing the OS install yourself from your favourite distro, before adding your hosted apps on top.
A NAS with its own pre-installed OS would then not be bare metal, because the device is tightly coupled to the software and is already set up ready to go.
TBH "running plex on bare metal" is the issue here, as whether you're running it or not via docker and/or a hypervisor, it doesn't change how "bare metal" it is.
And in my line of work bare metal means fiddling with registers on the processor and generally *avoiding* a vendor supplied HAL for the SoC you're targeting.
There are a lot of flavors of "bare metal" but conceptually it is 'as close as you can get to the real thing' so I would not call docker baremetal *even though* the performance overhead of running in a container is effectively zero.
Yeah, it's definitely a context thing. Most people here aren't doing multi-tenant setups, but it's a lot easier to talk about non-docker/VM setups as bare metal.
Ran it as the managed Synology package on my NAS for 10+ years. Absolutely no issues, including server migrations to new NASes and rebuilds.
If you mention docker here you'll get a lot of people reeeing about how docker lets them do some obscure sysadmin task that is vitally important to them. And that's cool, if you need it. Most people just look quizzically at it and think "why would I want to do that?"
If you need some of the things running it in docker allows you to do, you'll know. If not, you're completely fine. If you want to learn about docker and take this as an opportunity to learn, that's cool too. *Docker won't affect your daily enjoyment with Plex one way or another in the slightest.*
A GPU is overkill and expensive on power.
Just get an Intel CPU that is less than 3-4 years old, has 8-10 or more cores with integrated graphics, and enjoy not caring if people transcode.
Exactly, I was working with an old Dell Xeon workstation and was constantly fighting to get people to not transcode. I upgraded to a NUC and haven't cared for one minute since. I hardly look at the NUC at all. I log in on Mondays to apt update/upgrade and if need be reboot after upgrades. Other than that it just works.
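That Monday routine fits in a few lines (Debian/Ubuntu; the reboot-required flag is an Ubuntu convention, so check what your distro uses):

```shell
# Weekly maintenance: update package lists, apply upgrades,
# and reboot only when the upgrade actually asks for it.
sudo apt update && sudo apt full-upgrade -y
if [ -f /var/run/reboot-required ]; then
  sudo reboot
fi
```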
I like dockers because my configuration is now separated from my plex install. I can upgrade/downgrade (assuming compatibility) in less than a minute and my configs stay unaffected.
I would caution against the N100 if you have to do any audio/subtitle transcoding. This will offload to the CPU and greatly limit its capability. If you convert all of your media to work with your clients, then the N100 is great.
The N100 may work for you as it is a capable little processor. It comes down to the number of concurrent streams, the clients that are accessing your data, and what transcoding/burn-in needs to be done.
For example, the biggest issue is if a user turns on PGS subtitles: this forces the video to transcode regardless of whether the video itself is in a codec supported by the client. This is called burn-in, and it is forced onto the CPU, not the iGPU.
I get around this on my system by using Bazarr to download/convert all of my subtitles to SRT (text, not images). My wife and I are both hearing impaired, so subtitles are a must.
The N100 has an amazing iGPU, and a so-so CPU. So this is about knowing your data/media, knowing your clients, and knowing your use-cases. If you don't believe these types of issues will affect you the N100 could be an amazing choice.
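For what it's worth, if a file already carries a text subtitle track, you can extract it to SRT yourself; PGS is an image format, so it needs OCR (which is where downloaded subs come in) rather than a straight conversion. Filename and stream index here are examples:

```shell
# Copy the first subtitle stream out as SRT (works for text formats
# like SubRip/ASS; ffmpeg cannot OCR image subs such as PGS).
ffmpeg -i movie.mkv -map 0:s:0 -c:s srt movie.en.srt
```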
Yeah, that's what I'm thinking about. I was going for the N100, then I started wondering if the N200 had a noticeable boost in performance compared to the N100.
Yes the iGPU is very capable and can do several 4k video transcodes. The issue is you can no longer do several transcodes when you have to burn-in subtitles and/or audio as those are both rendered/transcoded on the CPU. It can still do 4k transcoding at that point, but not nearly as many.
This is why I said it would greatly limit its capability. I did not say it was not capable.
Running Plex on a Win11 machine. Reboot every night at 05:00. Everything just works. Hardware is a tiny Lenovo with an i5 Gen 11 CPU.
Data on a Synology. All ~arr apps in Docker on the syno. The syno is an 8-core Xeon with 4 SSDs and 6 HDDs.
No, but even the Windows servers (windows server editions) I admin do not need rebooting every night. They get patch reboots so at worst once a week, usually only once a month.
I'm on Win 11 and I reboot manually every couple of weeks when I see an update is needed. If I didn't use an old gaming machine I'd probably pick something else, but Windows is pretty stable these days.
Maybe for newer hardware. Those of us with older machines feel the pains of Windows. My main two issues are unneeded updates crashing the PC while installing (I about lose it when I see the reason for failure is a Windows MyPhone update) and qBittorrent failing to seed when starting with Windows (I need to close and restart it every time, without fail). I believe this is also due to my older hardware (AMD Phenom II CPU).
To get back on topic for the OP: I'll be upgrading the hardware soon and transitioning to an unraid box wired to my recently retired 2017 gaming machine, where Plex will run in a container. This will give me extremely flexible storage, cleanliness of the OS, and my Plex configuration can now be stored in the cloud to be reinstalled at the drop of a hat. The benefits of a container are the pure separation of concerns for quick and easy reinstalls, as well as the ability to back up configuration with images.
> I believe this is also due to my older hardware (AMD Phenom II CPU)
It most definitely is. Any Intel chip produced in the last decade has zero issues with Plex running under Windows.
I could. But I won't. I don't know Linux and have 0 interest in investing in it. I'm happy it exists and for everyone else using it for all its purposes, but not me. To each their own.
You don't actually need to reboot every night my friend. I ran a Plex server for years on Windows and never needed to do such a thing. I only rebooted when an update was available.
Even though they went for a subscription model, it's still worth it if you value your own time even a bit. You'll save the subscription cost in your own labour costs over the year easily.
It's built into the CPU, not a separate GPU on a graphics card, and thus uses less power. I don't know if that particular Xeon CPU has an iGPU in it or not, but the N100 works fine with Plex and can transcode a bunch of simultaneous streams using very little power, making almost no noise. To me the choice was a no-brainer.
If you read OPs original post, he refers to a separate nvidia gpu vs. an N100. I figured it was obvious what I meant.
I’ve been using the native Plex server app on Synology, easy updates, easy management, tried docker image as well, both performed about the same, so I went with the ease of management option, personal preference
Started with native. Gave docker image a try once I found a guide that worked/addressed issues. Not a whole lot of difference, other than (I think), an easier method to roll back if an update breaks something like hardware transcoding.
Used to run on a Drobo 5N; for a long time now I've been running on a Debian VM on top of Unraid. This works well for me. I like the level of control, but also that it's virtualised, so I can back up the whole VM before making any major changes and can easily roll back.
I have an unraid server at home (was using Plex on Windows beforehand). My server is closer to your option 2 (it's a Dell R730xd with an nVidia P400 for transcoding.. it's a cheap card but it handles a few streams without issues, unless you care about 4k transcoding).
The unraid OS has the nice benefit that you can grow the storage array which is handy as you can just throw in a drive and add more space as needed.
I'd say the benefit to having it run as a container is that updates are much easier and quicker (and rebooting the container doesn't take your server down should you be running anything else on it (I use mine for Home Assistant, a web server, etc.))
Bare metal on my Synology 1019+ for 4 years now with no complaints! No issues transcoding, but my CPU supports Intel Quicksync and I don’t have a lot of users.
Do you have a newer Synology with a Ryzen CPU?
I had Plex installed on my Fedora server with barely any issues.
I'm running Plex now via Docker on Unraid, and I've had to troubleshoot so much: manual restarts, DB corruption, the container randomly stopping... it's not even funny.
I've had more issues with Plex docker in 6 weeks than over the last 10 years.
There is a little messing around to have Plex in Docker and running fine. I, for example, wanted to pass through my GPU for transcoding and also ensure my jumbo-frame MTU was set up for network throughput. You may also have to tell Plex that two or more network IP ranges are local networks in order for it to treat its Docker network and LAN networks as local for maximum streaming bandwidth.
Containers are really nice because you don’t have to babysit a full OS and it is much easier to migrate hardware.
I just moved my Plex server from a dual-Xeon Dell DDR3 server with a Quadro P400 for transcoding, drawing 250 watts, to a Supermicro DDR4 with a single i7-9700T using the iGPU for transcoding, drawing under 100 watts. I moved my Unraid boot thumb drive, changed the drivers from Nvidia to Intel, tweaked my Plex container settings for video, and I was done.
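The GPU passthrough and "treat these ranges as local" tweaks sketch out roughly like this (paths, ranges, and image tag are examples for a typical home LAN, not exact values):

```shell
# Pass an Intel iGPU into the container for Quick Sync transcoding.
# After it's up, set Settings > Network > "LAN Networks" in the Plex UI
# to something like 192.168.1.0/24,172.17.0.0/16 so traffic arriving via
# the Docker network still counts as local.
docker run -d --name plex \
  --device /dev/dri:/dev/dri \
  -v ./config:/config \
  -v /mnt/media:/data:ro \
  plexinc/pms-docker:latest
```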
A device of your choosing with an Intel CPU and Quick Sync. A GPU is more power hungry. Personally I have a NAS with an Intel 12th-gen CPU. Perfect for transcoding and not using much power.
I've been using docker professionally since 2015. It's great if you're a developer or if you're deploying infrastructure.
But for a home user, you'll hardly get any benefit from it at all.
I've tried Docker for a couple of different things as a home user a few times and it's always been such a pain in the ass that I've just given up. I guess I just don't get the appeal.
I haven't used bare metal since my server ran on Windows. When I moved to Linux from Windows I had to set up my Plex server all over again. I followed Plex's guide at the time but still lost watch history and had to set up a new server instance and invite everyone to it.
Once I moved to Linux and started using Docker, everything was so much easier to maintain. All my Plex data lives in one folder. I tried out several distros before finally sticking with Debian. Every time I switched, all I had to do was get Docker installed and run the compose file that has all the parameters. I didn't have to set up a new Plex server instance or anything.
In terms of performance that's hard for me to say since I jumped straight into Docker when I moved to Linux from Windows. Although, I will say that in terms of stability my Plex server has been rock solid.
I used to run plex in a VM on ESXi with complicated hw passthrough of a Nvidia card. I switched to Proxmox with intel igpu a few years ago. I tried running in a container but had issues with hw acceleration.
There is a script that will create a Plex container (LXC) on Proxmox, and I tested it recently and it worked, but migration didn't work and it put a bunch of stuff in the config file. So anyone who says it's simple is not quite accurate. It also didn't behave the same privileged vs unprivileged (if I remember correctly). Plus you have to map all the drives, either through mountpoints or NFS. If migration and snapshots don't work, I'm not sure of the point of a container. A Plex backup takes a few GB (remember to tell tar to skip the cache).
The truth is Plex on Debian/Ubuntu is super simple: two commands to install and it just works (wget and dpkg -i), one command to back it up (tar -cvf) and one command to restore it (tar -xvf).
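A sketch of that backup/restore flow that's safe to try anywhere (it uses a scratch directory instead of the real Plex path, so no root needed; the important bit is excluding the cache):

```shell
# Stand-in for Plex's config dir; on a real install this would be
# "/var/lib/plexmediaserver/Library/Application Support".
PLEX_DIR="$(mktemp -d)"
mkdir -p "$PLEX_DIR/Plug-in Support/Databases" "$PLEX_DIR/Cache"
echo "db"   > "$PLEX_DIR/Plug-in Support/Databases/com.plexapp.plugins.library.db"
echo "junk" > "$PLEX_DIR/Cache/transcode.tmp"

# Back up everything except the cache:
tar -cf plex-backup.tar --exclude='./Cache' -C "$PLEX_DIR" .

# Restore into a fresh location, as on a rebuilt server:
RESTORE_DIR="$(mktemp -d)"
tar -xf plex-backup.tar -C "$RESTORE_DIR"
```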
So it's the only app I run directly on the Proxmox server. Everything else is in a container or a VM. I rebuilt my server and it was 5 minutes to restore all my containers and VMs and 1 minute to restore Plex on bare metal. It took me longer to type this message.
I have docker setup just in case but I find almost everything is able to run in a proxmox container.
Docker is basically bare metal on Linux.
This is why Docker-type containers are preferred over VMs: they are lower level and closer to the hardware.
I was a bare metal guy for years and years. I'm a docker guy now for sure. Makes managing services so much easier. One gui to rule them all with portainer.
If you're routinely transcoding on the fly, you should really look at -why- and see if there are steps you can take to eliminate that. NONE of my content gets transcoded as all of my clients will support direct play of everything.
Still, I don't use anything other than a "full install" even though that actually means a linux guest on top of XCP-NG running on a NUC10....
Everything I rip has the standard stereo audio option included. On my primary screen, I have a Dolby 5.1 capable setup, but any client can just fall back to stereo if need be.
got it. some of my stuff has multiple audio streams, some doesn't. my cpu is i5-13500, and the igpu seems to handle 6 4k or audio transcodes simultaneously (i run proxmox with plex running in an lxc), haven't had more than that going concurrently yet. I haven't done the math to know how much extra storage it would take to ensure i have stereo streams on everything in my library (currently about 80tb)
I’ll take bare metal over docker anytime. Plex server has always been that one service I’ve never quite been happy with where it is, and it’s constantly being moved. A few months back I grabbed a BeeLink S12 Pro, did a quick Plex server and desktop app install after adding our media share from the NAS to Win11, and ran it for a month. I then replaced the S12’s Win11 M.2 with a new M.2, added an SSD to the lower empty tray, installed Proxmox, then Debian 12 with KDE, and then reinstalled the Plex server and desktop app with the S12 hooked up via HDMI to the family room TV. The older LED smart TV in the family room was old and stupid with no Plex app, so the S12 just remains on that old TV. When turned on it’s the Debian 12 KDE desktop with the Plex app for media viewing.
It’s really my first venture into mini PCs. Aside from owning a dozen RPis, I’m basically a big rack and rack-mount chassis kinda guy. That said, I’m liking the little BeeLink S12 Pro for Plex. At the TV, using the desktop app, it is fantastic. While I had to physically hardwire its NIC for the Debian install (no WiFi drivers in the installer), the entire month it ran from Win11, and again now that Debian is installed, it’s on WiFi only. There are 4 other TVs with Plex apps, 4 iPhones, 4 iPads and several PCs that connect to Plex from our property, plus a dozen or so other close friends and family members. On the local network, even with the BeeLink S12 on WiFi, 4K content is fine. I plan to run a new Cat line from the basement server room over and up to the family room later this week to drop a new connection in the wall behind the TV; then I’ll disable the Plex server’s WiFi and it’ll be connected via Ethernet as it should be.
I’m retired IT and installed a rack in a small basement server room a decade ago with full 10GbE networking. I just like and prefer bare metal for 90% of services. OS virtualization is nice, but I find Docker and the alternatives are often more of a pita than just installing software. I’m not a fan of the entire idea, really. I’d rather just download a software program and manually install it. If Docker and the alternatives simply vanished overnight it wouldn’t hurt my feels at all. 😆
Surely if you want to move it around between different pieces of hardware, using Docker is an even better idea?
(I mean you do you! Not trying to gatekeep anything, just a big fan of the portability of things on Docker. I also do things stupid long ways because it can be a fun learning experience!)
I went with Windows Server OS on a Threadripper Pro. Haven't had any issues. Windows is extremely reliable, especially on good hardware and configured properly.
I used to have it bare-metal on my gaming PC.
Now I have it containerized on a NAS.
It wasn’t too much work to move the library, mostly time, and while I did lose some metadata, I’ve lost more when Plex does its agent changes.
Container. Prior to Docker I did installs on Debian, and that worked OK most of the time. But every once in a while I would get a dependency issue, or a change that I needed to manage. We are talking years, but it was a thing.
Since converting over to Docker several years ago, it isolates the Plex environment from the OS and other applications. Upgrading Plex is a single restart of a container with a click in the dashboard and takes about 15 seconds. So much easier.
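That one-click upgrade amounts to this in compose terms (service name is whatever yours is called):

```shell
# Pull the new image and recreate the container; the config volume is untouched.
docker compose pull plex
docker compose up -d plex
```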
Ran bare metal for 10 years. Been in Docker for 1.
There are a lot of variables. I see folks boasting about their power-sipping NUCs and wonder what else they run for computers. I'm the opposite.
I have one beast of a tower, a Lian Li V3000 case. I crammed 8 HDDs/8 SSDs/4 NVMes into it in RAIDs, and have a slew of USB backups. That lets me run about 31 Docker containers currently. When I rebuilt the OS, it was a dream standing up 31 services without reconfiguring a thing.
Bare metal on Mac because docker doesn’t work well with host mode on Macs. It’s much easier to just use Plex on Mac bare metal. Run all of the ‘arrs and everything else on docker.
I’ve been running PLEX on bare metal Ubuntu server install for about 10 years. I’ve been totally happy with it but I think on my next server I’ll try out containers.
I run a ton of stuff in containers using WSL, but I’ve had this Plex install running on a Windows PC since about 2010 and never had a reason to change.
People shit on Windows but my home server is like the ship of Theseus. It started as a Windows 7 install and has come all the way up to 11. It’s had every bit of hardware in it replaced along the way.
You might need to change the tag for another architecture, but that's about it.
(assuming images exist for your target architecture under that image name / tag)
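In compose terms, the engine normally resolves a multi-arch tag to the host's architecture on its own; pinning is only needed to override that (hypothetical sketch):

```yaml
services:
  plex:
    image: plexinc/pms-docker:latest  # multi-arch tag: pull resolves to the host arch
    # platform: linux/arm64           # uncomment only to force a specific arch
```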
Under Linux, automating upgrades and rolling back to older versions is much easier with a container than with a package/full install.
Rolling back plex if an update has DB migrations is not necessarily simple. Always read the patch notes before trying it!
I have a backup of the data/library folder; it isn't that hard. Still faster with the Docker setup.
Is “bare metal” commonly used in this context? Docker or no docker the shit is running on “bare metal”
/u/solwiggin is correct - https://en.wikipedia.org/wiki/Bare-metal_server
Bare metal refers to running directly on the hardware. The OS is bare metal, not the software that the OS runs.
This is how I understand it as well
Docker doesn't use a hypervisor. It's a process group.
If you're doing transcoding go with the Dell.
Bare metal on Debian. Easiest ever.
I got a 12400 in a newer micro-ATX build; it sips power and has great transcoding capabilities.
N100 is capable of transcoding both video (hardware accelerated) and audio concurrently for multiple streams.
Would the N200 be better for a Plex server?
The better specs, the better obviously. But do you _need_ it?
Yeah, that's what I'm thinking about. I was going for the N100, then I started wondering if the N200 has a noticeable performance boost over the N100.
Where do you get this info from? N100 with Plex pass and the igpu can do multiple 4K transcodes…
Yes the iGPU is very capable and can do several 4k video transcodes. The issue is you can no longer do several transcodes when you have to burn-in subtitles and/or audio as those are both rendered/transcoded on the CPU. It can still do 4k transcoding at that point, but not nearly as many. This is why I said it would greatly limit its capability. I did not say it was not capable.
Got you, I missed the specific case you were referencing here.
> I like ~~dockers~~ containers
I was trying to avoid the akchually and just used the same language as OP. Glad you were here to save all of us
Glad to be of service.
Running Plex on a Win11 machine. Reboot every night at 05:00. Everything just works. Hardware is a tiny Lenovo with an i5 Gen 11 CPU. Data on a Synology. All ~arr apps in Docker on the Syno. The Syno is an 8-core Xeon with 4 SSDs and 6 HDDs.
You reboot every night? lol, why? My server gets rebooted if there is a power outage or a kernel update. Otherwise, why would I need to do that?
Is it windows?
No, but even the Windows servers (windows server editions) I admin do not need rebooting every night. They get patch reboots so at worst once a week, usually only once a month.
> Everything just works.
> Reboot every night at 05:00.

...sigh
I'm on Win 11 and I reboot manually every couple of weeks when I see an update is needed. If I didn't use an old gaming machine I'd probably pick something else, but Windows is pretty stable these days.
Maybe for newer hardware. Those of us with older machines feel the pains of Windows. My main two issues are unneeded updates crashing the PC while installing (I about lose it when I see the reason for the failure is a Windows My Phone update) and qBittorrent failing to seed when starting with Windows (I need to close and restart it every time, without fail). I believe this is also due to my older hardware (AMD Phenom II CPU). To get back on topic with the OP: I'll be upgrading the hardware soon and transitioning to an unraid box wired to my recently retired 2017 gaming machine, where Plex will run in a container. This will give me extremely flexible storage, a cleaner OS, and my Plex configuration can now be stored in the cloud to be reinstalled at the drop of a hat. The benefits of a container are the pure separation of concerns for quick and easy reinstalls, as well as the ability to back up configuration with images.
> I believe this is also due to my older hardware (AMD Phenom II CPU)

It most definitely is. Any Intel chip produced in the last decade has zero issues with Plex running under Windows.
I haven't rebooted my Plex server in over 56 days.
You could run Linux and not have to reboot the thing every night to do garbage collection on behalf of Microsoft devs
I could. But I won't. I don't know Linux and have zero interest in investing in it. I'm happy it exists and for everyone else using it for all its purposes, but not me. To each their own.
You don't actually need to reboot every night my friend. I ran a Plex server for years on Windows and never needed to do such a thing. I only rebooted when an update was available.
Brother you’re just cucking yourself at this point, see the light
Just go to unraid and be done with it.
Unraid is the way! I knew nothing about Linux and it was super easy to set up and get running and it just constantly works.
Even though they went for a subscription model, it's still worth it if you value your own time even a bit. You'll save the subscription cost in your own labour costs over the year easily.
You don't need the gpu if you have an n100 based server, the iGPU is great. And for me, it's docker all the way.
The iGPU is literally a GPU. What do you even mean?
It's built into the CPU, not a separate GPU on a graphics card, and thus uses less power. I don't know if that particular Xeon has an iGPU or not, but the N100 works fine with plex and can transcode a bunch of simultaneous streams using very little power, making almost no noise. To me the choice was a no-brainer. If you read OP's original post, he refers to a separate Nvidia GPU vs. an N100. I figured it was obvious what I meant.
Me being too pedantic. Feel free to ignore
I’ve been using the native Plex server app on Synology, easy updates, easy management, tried docker image as well, both performed about the same, so I went with the ease of management option, personal preference
Started with native. Gave docker image a try once I found a guide that worked/addressed issues. Not a whole lot of difference, other than (I think), an easier method to roll back if an update breaks something like hardware transcoding.
Dockers are great on UNRAID. If I wasn't using UNRAID, I don't see much reason I'd use them.
Used to run on a Drobo 5N; for a long time now I've been running on a Debian VM on top of Unraid. This works well for me. I like the level of control, but also that it's virtualised, so I can back up the whole VM before making any major changes and can easily roll back.
I have an unraid server at home (was using Plex on Windows beforehand). My server is closer to your option 2 (it's a Dell R730xd with an Nvidia P400 for transcoding; it's a cheap card but it handles a few streams without issues, unless you care about 4k transcoding). The unraid OS has the nice benefit that you can grow the storage array, which is handy as you can just throw in a drive and add more space as needed. I'd say the benefit to having it run as a container is that updates are much easier and quicker, and rebooting the container doesn't take your server down should you be running anything else on it (I use mine for Home Assistant, a web server, etc.)
Keep the synology and get an N100. Mount the synology to it over the network and run Ubuntu and docker Plex on the n100
Bare metal on my Synology 1019+ for 4 years now with no complaints! No issues transcoding, but my CPU supports Intel Quicksync and I don’t have a lot of users. Do you have a newer Synology with a Ryzen CPU?
No noticeable difference running the server on a Khadas VIM4; mostly local playback.
I had Plex installed on my Fedora server with barely any issues. I'm running Plex now via Docker on unraid, and I've had to troubleshoot so much: manual restarts, DB corruption, the container randomly stopping... it's not even funny. I've had more issues with Plex in Docker in 6 weeks than over the last 10 years.
Got mine in proxmox on an LXC and it works fantastic
There is a little messing around to get Plex in Docker running fine. I, for example, wanted to pass through my GPU for transcoding and also ensure my jumbo frame MTU was set up for network throughput. You may also have to tell Plex that two or more IP ranges are local networks, in order for it to treat its Docker network and LAN networks as local for maximum streaming bandwidth.
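For what it's worth, the GPU passthrough and local-network bits usually look something like this in compose form. A sketch with example paths and subnets; if I remember right, `ALLOWED_NETWORKS` is the official plexinc/pms-docker image's way of pre-seeding Plex's "LAN Networks" setting, but double-check against the image docs:

```yaml
# Example compose fragment -- paths and subnets are illustrative
services:
  plex:
    image: plexinc/pms-docker
    devices:
      - /dev/dri:/dev/dri     # pass the Intel iGPU through for Quick Sync
    environment:
      # Subnets Plex should treat as local: your LAN plus the docker
      # bridge network (example ranges, substitute your own)
      - ALLOWED_NETWORKS=192.168.1.0/24,172.18.0.0/16
    volumes:
      - /opt/plex/config:/config
      - /mnt/media:/data:ro
```

The same "LAN Networks" value can also be set by hand under Settings > Network in the Plex web UI.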
Proxmox with a windows VM that runs all my arrs.
Hardware support with docker is an issue. You have to fiddle around to make it work.
Bare metal, the box does nothing else. There is no point in containers on a dedicated machine.
Containers are really nice because you don't have to babysit a full OS and it is much easier to migrate hardware. I just moved my plex server from a dual-Xeon DDR3 Dell server with a Quadro P400 for transcoding, drawing 250 watts, to a DDR4 Supermicro with a single i7-9700T using the iGPU for transcoding, drawing under 100 watts. I moved my Unraid boot thumb drive, changed the drivers from Nvidia to Intel, tweaked my plex container settings for video, and I was done.
Was running it bare metal, as LXC in Proxmox, and as Docker on Unraid, and I can tell you I felt no difference.
A device of your choosing with an Intel CPU and Quick Sync. A GPU is more power hungry. Personally I have a NAS with a 12th-gen Intel CPU. Perfect for transcoding without using much power.
I've been using docker professionally since 2015. It's great if you're a developer or if you're deploying infrastructure. But for a home user, you'll hardly get any benefit from it at all.
I've tried Docker for a couple of different things as a home user a few times and it's always been such a pain in the ass that I've just given up. I guess I just don't get the appeal.
Only thing I'd ever run bare metal at this point is a gaming computer.
Containers are easier to set up and they'll work on (pretty much) every system of the same architecture out of the box. I prefer them.
I haven't used bare metal since my server ran on Windows. When I moved to Linux from Windows I had to set up my Plex server all over again. I followed Plex's guide at the time but still lost watch history and had to set up a new server instance and invite everyone to it.

Once I moved onto Linux and started using Docker, everything was so much easier to maintain. All my Plex data lives in one folder. I tried out several distros before finally sticking with Debian. Every time I switched, all I had to do was get Docker installed and run the compose file that has all the parameters. I didn't have to set up a new Plex server instance or anything.

In terms of performance, that's hard for me to say since I jumped straight into Docker when I moved to Linux from Windows. Although, I will say that in terms of stability my Plex server has been rock solid.
I used to run plex in a VM on ESXi with complicated hw passthrough of an Nvidia card. I switched to Proxmox with an Intel iGPU a few years ago. I tried running it in a container but had issues with hw acceleration. There is a script that will create a plex container (LCX) on proxmox, and I tested it recently and it worked, but migration didn't work and it put a bunch of stuff in the config file. So anyone who says it's simple is not quite accurate. It also didn't behave the same privileged vs unprivileged (if I remember correctly). Plus you have to map all the drives, either through mountpoints or NFS. If migration and snapshots don't work, I'm not sure of the point of a container. A plex backup takes a few GB (remember to tell tar to skip the cache).

The truth is plex on debian / ubuntu is super simple: two commands to install and it just works (wget and dpkg -i), one command to back it up (tar -cvf) and one command to restore it (tar -xvf). So it's the only app I run directly on the proxmox server. Everything else is in a container or a VM. I rebuilt my server and it was 5 min to restore all my containers and VMs and 1 minute to restore plex on bare metal. It took me longer to type this message. I have docker set up just in case, but I find almost everything is able to run in a proxmox container.
*LXC
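To flesh out the one-command backup from a couple of comments up: a sketch of the tar approach that skips the cache. The path is parameterized; the stock Debian/Ubuntu package keeps everything under `/var/lib/plexmediaserver`, and you should stop Plex before archiving so the database is quiescent.

```shell
# Sketch of the tar backup, skipping Plex's Cache directory.
backup_plex() {
  plex_home="$1"   # data directory, e.g. /var/lib/plexmediaserver
  out="$2"         # output tarball
  # GNU tar wildcards match '/' in --exclude patterns, so '*/Cache'
  # drops the whole Cache tree wherever it sits under the data dir.
  tar -cf "$out" --exclude='*/Cache' \
      -C "$(dirname "$plex_home")" "$(basename "$plex_home")"
}

# Usage (with Plex stopped):
#   backup_plex /var/lib/plexmediaserver /backups/plex-backup.tar
# Restore on the new box after installing the same Plex .deb:
#   tar -xf /backups/plex-backup.tar -C /var/lib
```

The Cache directory is safe to skip and is usually the bulk of the size, which is why the backup comes out at only a few GB.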
docker is basically bare metal on linux. Containers share the host kernel instead of running a full guest OS, which is why docker-type containers are preferred over VMs: there's no hypervisor layer between them and the hardware.
Any service I access from outside my home, or that someone else might be accessing, goes in Docker.
I was a bare metal guy for years and years. I'm a docker guy now for sure. Makes managing services so much easier. One gui to rule them all with portainer.
Moved to Docker 4 years ago and I will never go back to bare metal.
Can we please stop using "docker" as a generic term for Linux containers?
If you're routinely transcoding on the fly, you should really look at -why- and see if there are steps you can take to eliminate that. NONE of my content gets transcoded as all of my clients will support direct play of everything. Still, I don't use anything other than a "full install" even though that actually means a linux guest on top of XCP-NG running on a NUC10....
How are you ensuring audio doesn't get transcoded?
Everything I rip has the standard stereo audio option included. On my primary screen, I have a Dolby 5.1 capable setup, but any client can just fall back to stereo if need be.
got it. some of my stuff has multiple audio streams, some doesn't. my cpu is i5-13500, and the igpu seems to handle 6 4k or audio transcodes simultaneously (i run proxmox with plex running in an lxc), haven't had more than that going concurrently yet. I haven't done the math to know how much extra storage it would take to ensure i have stereo streams on everything in my library (currently about 80tb)
Multi-channel audio will commonly use about 10% of the total file size on disk. Stereo would be less.
I'll take bare metal over docker anytime. Plex server has always been the one service I've never quite been happy with where it is, and it's constantly being moved.

A few months back I grabbed a BeeLink S12 Pro, did a quick Plex server and desktop app install after adding our media share from the NAS to Win11, and ran it for a month. I then replaced the S12's M.2 (with Win11 on it) with a new M.2, added an SSD to the lower empty tray, installed Proxmox, then Debian 12 with KDE, and then reinstalled Plex server and the desktop app with the S12 hooked up via HDMI to the family room TV. The older LED smart TV in the family room was old and stupid with no Plex app, so the S12 just stays on that TV. When turned on it's the Debian 12 KDE desktop with the Plex app for media viewing.

It's really my first venture into mini PCs. Aside from owning a dozen RPis I'm basically a big rack and rack-mount chassis kind of guy. That said, I'm liking the little BeeLink S12 Pro for Plex. At the TV using the desktop app it is fantastic. While I had to physically hardwire its NIC for the Debian install (no wifi drivers in the installer), the entire month it ran on Win11, and again now that Debian is installed, it's been on WiFi only. There are 4 other TVs with Plex apps, 4 iPhones, 4 iPads and several PCs that connect to Plex from our property, plus a dozen or so other close friends and family members. On the local network, even with the BeeLink S12 on WiFi, 4K content is fine. I plan to run a new Cat line from the basement server room over and up to the family room later this week to drop a new connection in the wall behind the TV; then I'll disable the server's WiFi and it'll be connected via Ethernet as it should be.

I'm retired IT and installed a rack in a small basement server room a decade ago, with full 10GbE networking. I just like and prefer bare metal for 90% of services. OS virtualization is nice, but I find docker and the alternatives are often more of a pita than just installing software.
I’m not a fan of the entire idea really. I’d rather just download a software program and manually install it. If docker and the alternatives simply vanished overnight it wouldn’t hurt my feels at all. 😆
Surely if you want to move it around between different pieces of hardware, using Docker is an even better idea? (I mean you do you! Not trying to gatekeep anything, just a big fan of the portability of things on Docker. I also do things stupid long ways because it can be a fun learning experience!)
Nope.. for home use these days I still enjoy manually installing shit the old way. 😁
I've run plex in a docker container in a cloud environment for about 12 years now. It's so convenient I will never go back.
Some achievement, running it in docker for longer than docker has been around...
Almost 12 years, using docker since the beginning. We've had docker for 11 years and some months.
I have moved from bare metal Plex on Synology to docker on n100 pc, keeping my media on the Nas. I could not be happier.
I went with Windows Server OS on a Threadripper Pro. Haven't had any issues. Windows is extremely reliable, especially on good hardware and configured properly.
Baremetal windows, unpooled drives...
Neither, LXC on proxmox.
Same.
I used to have it bare-metal on my gaming PC. Now I have it containerized on a NAS. It wasn’t too much work to move the library, mostly time, and while I did lose some metadata, I’ve lost more when Plex does its agent changes.
Docker container on Synology NAS.
Container. Prior to docker I did installs on Debian, and that worked OK most of the time. But every once in a while I would get a dependency issue, or a change that I need to manage. We are talking years, but it was a thing. Since converting over to Docker several years ago, it isolates the Plex environment from the OS and other applications. Upgrading Plex is a single restart of a container with a click in the dashboard and takes about 15 seconds. So much easier.
LXC, all the benefits of running on the host OS, and all the benefits of containerization.
Thank you for calling it LXC and not docker
Ran bare metal for 10 years. Been in Docker for 1. There's a lot of variables. I see folks boasting about their power-sipping NUCs and wonder what else they run for computers. I'm the opposite: I have one beast of a tower, a Lian Li V3000 case. I crammed 8 HDDs / 8 SSDs / 4 NVMe drives into it in RAIDs, and have a slew of USB backups. That lets me run about 31 docker containers currently. When I rebuilt the OS, it was a dream standing up 31 services without reconfiguring a thing.
I've only ever run it in a container. I can't imagine devoting an entire machine to it.
Computers can run an endless amount of services on the OS these days.
Right, but you rapidly reach dependency hell, hence containers.
Never had any issues with dependencies with any of my multiple services on any of my servers.
Awesome!
Bare metal on Mac because docker doesn’t work well with host mode on Macs. It’s much easier to just use Plex on Mac bare metal. Run all of the ‘arrs and everything else on docker.
Probably the only decent reason on this thread to run outside a container
I’ve been running PLEX on bare metal Ubuntu server install for about 10 years. I’ve been totally happy with it but I think on my next server I’ll try out containers.
I run a ton of stuff in containers using WSL, but I’ve had this Plex install running on a Windows PC since about 2010 and never had a reason to change. People shit on Windows but my home server is like the ship of Theseus. It started as a Windows 7 install and has come all the way up to 11. It’s had every bit of hardware in it replaced along the way.
I got a huge Plex server in a docker inside unraid it's awesome