
re: Any "homelabbers" want to share their setups?

Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 6/12/21 at 1:33 pm to
Posted by mchias1
Member since Dec 2009
903 posts
Posted on 6/12/21 at 1:34 pm to
What are you running at home to need that much hardware?

Right now I'm running an old desktop with a single VM on it for Home Assistant. The host runs my NVR and TV DVR.

I could upgrade to a single server with Xeons in the future to completely separate everything and probably still have horsepower to spare.

I've got my network segregated into 4 VLANs: main, guest/kids, IoT, and cameras.

Still need to decide if I want the overhead on my EdgeRouter X to run WireGuard or put it on my spare Pi.
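If I go the Pi route, the setup should be pretty minimal. A rough sketch (keys, addresses, and port are placeholders, not an actual config):

# Raspberry Pi OS / Debian: install WireGuard and generate a key pair
sudo apt install wireguard
wg genkey | sudo tee /etc/wireguard/server.key | wg pubkey | sudo tee /etc/wireguard/server.pub
sudo chmod 600 /etc/wireguard/server.key

# minimal /etc/wireguard/wg0.conf
sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
[Interface]
Address = 10.10.10.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
# phone/laptop client
PublicKey = <client public key>
AllowedIPs = 10.10.10.2/32
EOF

# bring the tunnel up now and on every boot, then forward UDP 51820 on the router
sudo systemctl enable --now wg-quick@wg0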
Posted by dakarx
Member since Sep 2018
7827 posts
Posted on 6/12/21 at 6:36 pm to
quote:

What are you running at home to need that much hardware?


Anything I want :) Almost....always room for more.

Generally I keep a dozen machines running 24x7 for my day-to-day household stuff, but on any given day I likely have 25 to 30 VMs running. Depending on what I'm working on, playing with, or whatever, the numbers may spool up or down...

Add to that, if the wife is doing her DBA/security studies/work she needs 4-10 machines etc...
(and if she asks, this is my justification for additional hardware or upgrades)
Posted by LSshoe
Burrowing through a pile o MikePoop
Member since Jan 2008
4296 posts
Posted on 6/14/21 at 9:13 am to
quote:

EdgeRouter X to run WireGuard


WireGuard on my ERX runs great. Took me a hot minute to get it working, but it was also the first time I had used WireGuard, so that could have been part of it.
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 6/27/21 at 4:03 pm to
After putting this project on the back burner for a while, I think I'm ready to start shopping for gear. I'm good on the networking side, but I'm hoping to get some recs on server hardware. My house is older and doesn't really have a good space for this stuff, so unfortunately I'm limited to a 12U shallow (~18" deep) rack at this time.

I think I'm pretty set on one machine dedicated to FreeNAS storage, and another dedicated to Proxmox. I would like eight 3.5" hot-swap drive bays in the FreeNAS box, and I don't really need top-notch performance on either of them. I could easily get by with 8-10 year old server CPUs, as they would probably greatly outperform the single i3 desktop that I currently have running Plex, ZoneMinder, and a few other things.

I think my trouble will be in finding chassis that will fit. Are there any good options/deals on used shallow servers? I've seen maybe one that is 12" deep, and a few that are 15" deep, but it looks like all the deals are on standard depth machines.
Posted by dakarx
Member since Sep 2018
7827 posts
Posted on 6/27/21 at 6:58 pm to
Ebay Link — these are the only short-depth servers I know of on the secondary market right now... They can be had in a lot of configurations. Not sure what the actual dimensions are, but it might give you a starting point. I haven't messed with any of these, but they look to be reasonable.

Honestly for the $$$ a Dell R620 will give you far more bang for the buck, but they are 28" in depth, and R610's are 30". But they generally come with dual power supplies, RAID controllers, RAM slots, etc. Really depends on what you want to do.
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 6/27/21 at 9:12 pm to
Really hard to find used half depth servers, but I did find this company (Plink) that has a pretty nice selection of short depth cases. Linked above is a 4U chassis with 8x3.5" hot swap bays and only 15.25" deep. So I guess I'll end up building my own machine for the NAS.
quote:

Honestly for the $$$ a Dell R620 will give you far more bang for the buck, but they are 28" in depth, and R610's are 30". But they generally come with dual power supplies, RAID controllers, RAM slots, etc. Really depends on what you want to do.
I'm looking at the R220. Only one CPU socket, minimal memory slots, and a single PSU, but hey, they're pretty cheap and, most importantly, it'll fit in my space!

I just won't have the space for a standard size machine unless I do some remodeling, but I should have a few free rack spaces so I can cluster 2 or 3 R220's if I need to. Honestly though my needs are pretty minimal, and this proxmox box will be mostly for experimentation. My biggest need at this point is bigger and more resilient storage, so that's where most of the money will be going.
Posted by CarRamrod
Spurbury, VT
Member since Dec 2006
57941 posts
Posted on 6/28/21 at 8:23 am to
Right now I just run a big computer on Windows that hosts my Plex and FreeNAS and used to host Blue Iris. I have a mini PC running Proxmox that hosts my Home Assistant server and everything it runs internally. Since I got that running on Proxmox I have always wanted to convert the big computer to Proxmox, but that would make everything a VM. I built this computer for high-end gaming, but I have never gotten into playing anything due to other hobbies. And if everything was a VM, idk how I would get that gaming capability up to my TV or monitor.

I also have another mini PC running brewing automation control software, but I don't use it for anything else.
This post was edited on 6/28/21 at 10:14 am
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 2/20/22 at 7:18 pm to
So after putting this project on the back burner for another while, and after giving it a lot more thought, I've decided on a different approach. For some reason I was thinking that separating storage from compute would give me a more stable system, but I finally realized that I was just creating two single points of failure (SPOFs) in the system.

So my new plan is Proxmox with Ceph for storage, running on a few Dell R210s or R220s. These are cheap and plentiful on eBay, and they each have two drive bays. I had been dead set on TrueNAS, but pickings are so slim on shallow-depth rackmount boxes that hold a lot of drives, plus you have to put out a lot up front to account for future expansion, plus storage growth comes in big expensive chunks... everything about it started looking less attractive after a while.

So for roughly the same cost as a single box with a lot of storage, I can stack 3 or 4 of these dell units which would take up the same rack space, offer the same amount of usable storage, provide far more compute capacity, be more flexible, and eliminate the SPOFs at the same time.

I *think* this proxmox+ceph cluster setup will give me everything I'm looking for. Lots of cores to distribute loads, the ability to create a large pool of storage spanning many mismatched disks with redundancy, the ability to seamlessly add nodes in the future, resilience to hardware failure with little or no downtime and automatic storage healing. I think I can even add some RPi's to the cluster if I want to add more storage without the extra hardware expense and power consumption.
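From what I've read, the bring-up should only be a handful of commands once Proxmox is installed on each box. A rough sketch, not a final design (cluster/pool names, the disk, and the network are just examples):

# first node creates the cluster, every other node joins it
pvecm create homelab
pvecm add <ip-of-first-node>

# install Ceph packages on every node, initialize once, and create monitors on a few nodes
pveceph install
pveceph init --network 192.168.1.0/24
pveceph mon create

# on each node, turn its data disk(s) into OSDs
pveceph osd create /dev/sdb

# replicated pool for VM disks (it can be added as RBD storage from the GUI)
pveceph pool create vmpool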


This will be my first experience with proxmox, ceph, and clusters in general, so I guess once I get going I will report back on how easy/hard it all is, what problems I ran across, what misconceptions I had, and whether I would do it completely different if I were to start over.
Posted by bluebarracuda
Member since Oct 2011
18834 posts
Posted on 2/21/22 at 7:53 am to
I've gone full overboard with my setup at my new house

R620 for my pfSense firewall
HP DL380 G8 for Plex
HP 2000 SAN that I converted to a DAS
Custom whitebox game hosting server
NiceHash mining box
Spare DL380 G8
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 2/21/22 at 6:46 pm to
quote:

I've gone full overboard with my setup at my new house
Eh, before you know it you'll need to upgrade. I'm feeling better about my most recent plan for exactly this reason: I think it'll make future upgrades mostly painless. I shouldn't have to totally rebuild anything or have any downtime in the process. I should be able to just add a new machine, add it to the cluster, then rebalance my services. Same when I decommission an old machine: I should be able to just take it down and keep on trucking.
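On paper the whole dance is just a few commands (VM/CT IDs and node names here are made up):

# new box, fresh Proxmox install: join the existing cluster
pvecm add <ip-of-any-existing-node>

# drain the box being retired (VMs can move live, containers restart on the target)
qm migrate 101 newnode --online
pct migrate 202 newnode --restart

# once it's empty and powered off, drop it from the cluster
pvecm delnode oldnode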
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 9/1/23 at 1:05 am to
So obviously this has been a long drawn-out project. I've been picking up bits and pieces of gear and knowledge in between other more pressing things, but I've finally got something up and running.
quote:

So my new plan is Proxmox with Ceph for storage
This is looking very promising so far.
quote:

running on a few Dell R210s or R220s
I ordered two of the R210s. The first one wouldn't power up, so I sent it back. The second one ran for an hour before the power supply burned up. I was already hesitant about using these because I don't have a great place for a rack, and these power issues put me off completely. I ended up buying four old T1700 SFF desktops. Spent about $500 total on these and they all run great. Unfortunately, the SFF only has room for one 2.5" and one 3.5" drive, or I guess three 2.5" drives since they have three SATA ports.
quote:

I *think* this proxmox+ceph cluster setup will give me everything I'm looking for. Lots of cores to distribute loads, the ability to create a large pool of storage spanning many mismatched disks with redundancy, the ability to seamlessly add nodes in the future, resilience to hardware failure with little or no downtime and automatic storage healing. I think I can even add some RPi's to the cluster if I want to add more storage without the extra hardware expense and power consumption.
All of this seems to be accurate. Proxmox and Ceph are fricking amazing, and with the integration it's pretty easy to get going.

For about the cost of a 4-bay NAS, I have a 4-8 drive "NAS" plus a ton of USB ports for more space, plus gobs more compute power than a little NAS. I'm using typical triple-replicated pools, so the data is basically bulletproof. I had an old NUC to use as a 5th node, so I can lose 2 entire machines and the data would still be triple replicated after Ceph repairs and rebalances (assuming I have enough free space).

I've been testing this extensively during the whole setup and learning process. I will just shut down a machine to add or swap out a drive, and during these times I watch the Ceph status as it goes to work rebalancing. Ceph (as in cephalopod) is a truly apt name, as I imagine an octopus moving chunks of data around to make sure it's all safe and sound. I have a handful of test VMs using Ceph as block storage, and they didn't notice at all as Ceph was doing its thing behind them.
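For reference, the 3/2 behavior is just two pool settings, and watching the recovery is one command (pool name is an example):

# keep 3 copies, keep serving IO as long as 2 survive
ceph osd pool set vmpool size 3
ceph osd pool set vmpool min_size 2

# watch the repair/rebalance after pulling a drive or a node
ceph -s    # one-shot health and recovery summary
ceph -w    # follow the cluster log live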

The downside for me currently is performance, but honestly I find it pretty good considering all the things I've done wrong (read: on the cheap). I'm using slow spinners and just gigabit networking (they recommend 10GbE), I don't have a dedicated network just for Ceph traffic, and my cluster is very small (performance gets far better with more disks). Still, write performance is about 60MB/s and read performance is about 200MB/s for the VMs within the cluster. Cut those in half if I'm transferring data to/from the cluster.
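If anyone wants to compare numbers, Ceph ships a simple benchmark you can run against a throwaway pool; something like this (pool name is arbitrary, and deleting pools requires mon_allow_pool_delete to be enabled):

ceph osd pool create bench 32
rados bench -p bench 30 write --no-cleanup
rados bench -p bench 30 seq
ceph osd pool delete bench bench --yes-i-really-really-mean-it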

I can improve performance with any combination of:
1. adding nodes/disks
2. upgrading to 10GbE
3. creating a dedicated network for Ceph traffic
4. using SSD or NVMe drives in their own pool(s)

Most likely, though, for the foreseeable future this level of performance is adequate for my needs so I will continue building out my frankenstein cluster in a similar way that it started - put whatever storage I run across into whatever box will run proxmox and let it eat.

Next I will try an erasure-coded pool. As I mentioned, I'm currently using triple-redundant pools, so I can only use 33% of my raw storage. EC pools require more CPU, but I have plenty to spare so I should be good, and then I can use 60+% of my raw storage with a similar level of data durability.
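The sketch I have in mind (profile/pool names are examples): with k=3 data chunks and m=2 coding chunks, 3 of every 5 chunks are data, so 60% of raw capacity is usable and any two chunks can still be lost.

ceph osd erasure-code-profile set ec32 k=3 m=2 crush-failure-domain=host
ceph osd pool create ecpool 64 64 erasure ec32
ceph osd pool set ecpool allow_ec_overwrites true   # needed to put RBD/CephFS data on it

# note: RBD images still keep their metadata in a small replicated pool;
# the EC pool only acts as the data pool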

quote:

This will be my first experience with proxmox, ceph, and clusters in general, so I guess once I get going I will report back on how easy/hard it all is, what problems I ran across, what misconceptions I had, and whether I would do it completely different if I were to start over.
To sum up this installment, Proxmox makes clustering easy, and its Ceph integration makes Ceph clustering easy. I haven't run across any real problems yet, and performance is actually better than the low expectations I had (probably skewed by the hardware recommendations I read, which are aimed at datacenters). It took me a long time to settle on a plan, but now I'm pretty certain that I arrived at the right one for me and my needs. My most recent NAS experience was with a 4-bay QNAP, and that thing was awful. It was slow, it ate a drive every 6 months, and rebuilds took days.

The advantages with ceph are numerous:

1. it can seamlessly grow beyond the limits of a fixed number of bays
2. data rebuilds are fast and happen automatically and immediately without waiting for a new drive
3. drives can be mixed and matched
4. similar hardware cost can run far more services
5. eliminates multiple single points of failure that exist with a NAS (psu/cpu/ram/nic/mobo/etc)


Really fun so far.
Posted by bluebarracuda
Member since Oct 2011
18834 posts
Posted on 9/1/23 at 7:45 am to
I'm actually doing a pretty significant downgrade of my lab now, if anyone wants some really nice hardware for a good price. Moving everything from a full-size rack in my storage room to a smaller rack in my closet.

I'm currently running:

(keeping)
- Brocade ICX-6610 24port 10gbe POE switch
- Dell R210 II (firewall with opnsense)
- Whitebox with dual 2697 v3s, 256gb DDR4, 55tb, Quadro P4000 (Proxmox - truenas, plex, seedbox, game server, etc)
- Dell R620 with 2x 2660 v2s, 128gb DDR3, 60tb (onsite backup)

(moving away from)
- Qumulo QC208 with dual 2640 v4s, 128gb DDR4
- Supermicro 6048r with dual 2620 v3, 16gb DDR4
- HP DL380 G8 with 2x 2620 v2s, 256gb DDR3
- Full size rack

I had the Qumulo as my bare metal NAS and the supermicro as my onsite backup. The DL380 was my lab lab

Edit: I'm hoping this weekend I'll get a chance to finally run cabling for my cameras to start setting up those. I also need to swap out my Ubiquiti AC Pro with my EnGenius Wifi 6 AP
This post was edited on 9/1/23 at 7:56 am
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 9/1/23 at 12:50 pm to
That's a whole lot of horsepower that I don't need right now.

The most intensive computing on my roadmap will be surveillance video recording and analysis, and my main objective with this project is to develop software to scale that task out rather than up. Running into the limits of a machine is a real pain in the arse (and wallet), to the point where often the next step is never taken because it's such a budget-buster. Assuming I can make it all work right, a cluster NVR that can grow incrementally would be pretty attractive IMO. I'm sure this exists in these million dollar surveillance systems that I know nothing about, but I need to bring it to the thousand dollar systems.
Posted by Hopeful Doc
Member since Sep 2010
15388 posts
Posted on 9/1/23 at 1:24 pm to
quote:

Proxmox and Ceph are fricking amazing



I went ahead and threw Proxmox on an old OptiPlex microSFF with an 8500T inside it + 16GB RAM. It's my Home Assistant box + a few add-ons. I added a Fedora instance, mostly just to test out Proxmox. I've been lazy and have my AdGuard Home instance inside of HA. I tried adding a second instance into Fedora just to play, but it takes a bit of work to configure, and I haven't been bothered to dedicate the time to it yet.

I was going to transition my Plex, Emby (yes, I'm dumb enough to have bought lifetime premiere subscriptions for both), and ChannelsDVR instances to the Fedora side of the box, but those are mostly sitting on my "big rig" rather than getting offloaded, and I ran into a little trouble trying to reference mounted network shares (from my NAS, where the media files sit) inside of Fedora. It works fine on my Windows instances, so I just need to relearn it (or get rid of Fedora and use Ubuntu/Debian like the other idiots who don't put enough time in and want to copy other people's work). I wanted to dip my toe in the RHEL side to play around; my original introduction to the sysadmin world was a Novell GroupWise server that we transitioned out of about a year later to Windows.
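For my own notes, the Fedora share mount should just be cifs-utils plus a mount line; roughly this, with made-up paths and credentials:

sudo dnf install cifs-utils
sudo mkdir -p /mnt/media
sudo mount -t cifs //nas/media /mnt/media -o credentials=/root/.smbcreds,uid=1000,gid=1000
# .smbcreds holds username= and password= lines; move the mount into /etc/fstab to make it stick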

Having only one “mini” pc and, so far, no real need, I haven’t looked into Ceph much, but two questions:
1) is mix/matched hardware compatible (different CPU, mix of intel/amd)?
2) does ceph work without their subscription?




I'm probably not going to migrate away from my Synology NAS at home. Sorry to read about your experience with used servers; that certainly adds a layer of hesitancy for my next move, but the number of 192GB RAM dual-Xeon boxes that already have 8+TB of storage still makes it a little tough for me to not want to dip my toe in there for my office setup. I'm even finding some sub-$500 options with activated Windows Server installed. I don't need most of what it has, and about half the machines are Windows Home (which doesn't allow domain join/Active Directory services if I remember correctly, but they can join an SMB share just fine). But it'd be hard to surpass the value as a simple NAS without buying lots of other used components, I guess until I take the very legitimate longevity concern into account (even the electricity costs will wind up negligible, I think).

I doubt I'll have the mental fortitude to migrate everything to RHEL-compatible stuff (I'm small enough that I think I can use the community versions for free and get away with it, but then I have to teach 17 busy users new stuff that is only really going to annoy them without adding much actual value, because very little local network activity that isn't printing is happening, and that's actually a pretty strong point in favor of Windows).



Also, can we coin the term RAIC as the generic for Ceph? Redundant array of inexpensive computers?
This post was edited on 9/1/23 at 1:29 pm
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 9/1/23 at 4:07 pm to
quote:

1) is mix/matched hardware compatible (different CPU, mix of intel/amd)?
You can cluster them, but if you want to use the high availability (HA) or live migration features, then you should probably make groups of same/similar architectures and assign your VMs accordingly. I think it would probably work regardless (at least kinda-sorta), but for performance reasons the software should be compiled for the target CPU. I think it's even possible to run x86 VMs on ARM, though of course performance is shite. Ceph should work pretty seamlessly regardless of the platform mix. Each node will be running Ceph compiled for its own CPU, and inter-node communication is a few layers above the hardware.
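If you do mix Intel and AMD, roughly what I mean looks like this (VM ID, group, and node names are made up):

# give the VM a generic baseline CPU type so live migration doesn't trip over host-specific flags
qm set 100 --cpu kvm64

# HA group restricted to the Intel boxes, then tie the VM's HA job to that group
ha-manager groupadd intel-nodes --nodes node1,node2,node3
ha-manager add vm:100 --group intel-nodes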
quote:

2) does ceph work without their subscription?
Yep, I think actually everything works without a Proxmox sub. You just get a nag button every time you login (and I think there are ways to remove that). You also need to switch to the no-sub repositories for Proxmox updates and choose the no-sub option when installing Ceph.
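For reference, the repo switch is just a couple of sources entries. These are the PVE 8 / Debian bookworm lines; adjust the Debian and Ceph codenames for whatever release you're on:

echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph.list
# comment out the enterprise entries in /etc/apt/sources.list.d/pve-enterprise.list, then:
apt update && apt full-upgrade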
Posted by bluebarracuda
Member since Oct 2011
18834 posts
Posted on 9/1/23 at 4:39 pm to
quote:

You just get a nag button every time you login (and I think there are ways to remove that).


This sometimes does the trick

quote:

sed -i.bak "s/me.updateActive(data)/me.updateCommunity(data)/g" /usr/share/pve-manager/js/pvemanagerlib.js
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 9/5/23 at 1:47 pm to
Update on what I've learned so far about proxmox+ceph and my primary use thus far (plex media).

Initially I ran a fileserver container (TurnkeyFS) which mounted a media storage volume stored on Ceph block storage. It worked, but it felt kind of "dirty" that way, with too many services stacked up.

I wanted to use CephFS, but for a while I couldn't figure out how to use it. I created a Ceph filesystem, but then what? Turns out it was super easy... when you create a Ceph filesystem in Proxmox, the Proxmox nodes just mount it at /mnt/pve/cephfs, which you can then bind mount in any container. It's the same filesystem and files, accessible from all the nodes in your cluster, backed by Ceph. I moved the media out of that locked-up volume and into the /mnt/pve/cephfs directory tree, and now I can give any number of containers access to it to add or share files. For example, I have a Samba share that I can drop videos into and Plex will add them to the library, and of course I have the *arrs putting stuff in there too.
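The whole thing boils down to two commands. A sketch with a made-up container ID and paths (not my exact layout):

# create the filesystem; Proxmox mounts it on every node at /mnt/pve/cephfs
pveceph fs create --name cephfs --add-storage

# bind mount a directory from it into an LXC (watch UID mapping on unprivileged containers)
pct set 105 -mp0 /mnt/pve/cephfs/media,mp=/mnt/media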


Obviously this is a lot more work than just using a NAS, but for me the flexibility is more than worth it. I'm liking this whole "hyper-converged infrastructure" thing, even at such a tiny scale. It simplifies building a lab when all the pieces are interchangeable and work as a cohesive whole.
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 10/1/23 at 8:58 pm to
I doubt anyone cares but I'll document some of my stumbling blocks and successes here anyway.

Before I really trusted Ceph enough to start using it in "production", I put it through the wringer. Kind of on purpose, but also kind of because I was learning it and figuring out how it works. To do it "right", you would usually have a lot more nodes than I have, and roughly equal amounts of storage on each node. I have relatively few nodes, and at first I had a really large difference in storage per node (like a 10x diff). Also, I was dropping out nodes and replacing drives like crazy, sometimes turning over close to half my available storage in a single day. As a result Ceph was in a constant state of poor health, always rebuilding and rebalancing. But it just kept working... until it didn't. I pushed it too far and too hard, and at some point some chunks of data were down to only 2 copies waiting to make a 3rd, but I pulled a drive before that happened. I'm using 3/2 pools, which means Ceph will maintain 3 copies of all data, and it needs at least 2 copies to continue IO on that data.

So when I was down to just 1 copy of a significant chunk of the pool, Ceph halted IO on a lot of it. I didn't know this would happen, and for a while I didn't realize that it had. All I knew was my VMs had stopped working because their boot disks were stored in Ceph and could no longer be read. And Ceph kept getting stuck repairing the data, probably because my smaller disks were near full and it couldn't figure out where to place new copies. Even after I added more storage it just would never repair the data. I could not get it to heal itself.

I chalked it up to a small-cluster problem that Ceph was not designed for. No "real" installation would ever turn over half of its storage in a day. If I had done things more slowly I think it would have all been fine.

So I solved my small-cluster problem with a small-cluster solution: I just created a new pool on the same cluster of nodes, set the old pool to 3/1 (so that it would turn IO back on with just 1 copy), and copied all the data from the old pool to the new one. And it fricking worked! As far as I can tell, Ceph didn't lose a single byte of data. It went into lockdown emergency mode to protect the data at all costs, including the cost of bringing down my VMs. Moving everything to the new pool allowed Ceph to place 3 copies of everything where they should go, and all my VMs fired back up and all my media was still intact.
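In command terms, the rescue was roughly this (pool/storage names and the VM ID are examples, and the disk-move command spelling varies a bit between Proxmox versions):

# let the degraded pool serve IO again with only one surviving copy
ceph osd pool set oldpool min_size 1

# build a fresh 3/2 pool, add it as RBD storage in Proxmox, then move each VM disk over
ceph osd pool create newpool 128
ceph osd pool set newpool size 3
ceph osd pool set newpool min_size 2
qm move_disk 101 scsi0 newpool-rbd --delete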

So now I've got VMs with boot disks on ceph block storage, and some of them work with data stored in cephFS. Since everything is available from all nodes, I can migrate a service from one node to another very quickly. There is no waiting to copy a disk image to the new node because the disk image is already there.


Also, in the past, I've had a few ports open on my router to allow access to some stuff. I've always wanted to set up a VPN, but it seemed like a pain and I never had time to look into it. But then I ran across Tailscale, and it is very simple. Just run a simple command to get any machine on the VPN, and it has a reverse proxy to route requests to your various containers while serving a valid certificate. I had not dealt with any of this before, and in a few hours it was all up and running great. Can't recommend it enough.
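For anyone curious, getting a box onto the tailnet really is just a couple of commands. The HTTPS/reverse-proxy side is handled by 'tailscale serve' and 'tailscale cert', whose exact syntax varies by version, so check the built-in help:

curl -fsSL https://tailscale.com/install.sh | sh
tailscale up    # prints a login URL the first time; after that the node is on the VPN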
Posted by Korkstand
Member since Nov 2003
28996 posts
Posted on 10/2/24 at 2:09 pm to
Wow, it's been a year already. Time for an upgrade (or maybe downgrade in a way).

I've been using these SFF machines for my cluster, and while they've been working great, there are a couple of problems. The first is that I prefer to use 3.5" drives to keep my $/TB down, but these machines only hold one drive of that size. The other problem is that they have way more power than I need, and of course they use a lot of electricity and get pretty hot and loud. My cluster draws probably 200W on average, and while that's not a whole lot, it does add up to ~$250/year in electricity. Still not a major expense, but given that the power is mostly wasted I don't want to keep burning money for no reason.

So I've recently found these. It's basically a NAS, but more of a PC in a NAS form factor. This gives me two 3.5" drives per node, but in my case the most important metric is watts per 3.5" drive, which will drop from about 50 to about 5.

I've got one on the way, and when it arrives I'll put proxmox on it, add it to my cluster, decommission one of my old boxes, and let ceph do its thing. I expect it to go well, and assuming it does I'll proceed with replacing the others in the same way. In the end I will have replaced my entire cluster with no downtime of services or storage.
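The per-box swap should go roughly like this (OSD number and node names are made up), with the guests migrated off first using the usual qm/pct migrate commands:

# stop placing data on the old box's disk(s) and let Ceph drain them
ceph osd out osd.3
ceph -s    # wait for HEALTH_OK before going further

# once drained: stop and purge the OSD, then drop the empty node from the cluster
systemctl stop ceph-osd@3
ceph osd purge osd.3 --yes-i-really-mean-it
pvecm delnode oldnode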