Making a home server
I have been using a custom Debian setup for almost 10 years as a Samba server with a simple cron-based backup system that, over time, grew to do lots of other things. The setup is not particularly complex or automated, but I am proud to say that, thanks to the sh scripts I created, I can have it working again in an afternoon or a bit more of setting up.
It has been working relatively fine and only occasionally needs maintenance, but when it does, it is really something
that puts me in deep pain and wastes a lot more time than I'd want. I HATE IT.
It is still much better than not having anything, but I'd like to make this more robust and reproducible. Living in Brazil limits my options and makes it VERY hard and expensive to buy good parts. I'd love to get myself a THREADRIPPER or an EPYC CPU, but I would pay 2.5 times the price. Yeah, living here is like a perpetual state of the GPU shortage we went through during the pandemic, but for every gadget I'd like to get my hands on. This is the reason why I use consumer-grade parts, which in turn means more maintenance than I would like.
So, during the last server breakdown, caused by using a cheap mini-PC as the server, I started talking to my friend HP, core Godot contributor and, incidentally, my boss at prehensile-tales. They gave me the idea of just using containers for EVERYTHING and, over a more in-depth conversation, showed me that, provided I took the time to set it up correctly, I could drastically reduce the downtime and the work needed to maintain or reinstall everything. They also gave me clever backup tips which I frankly would not have thought of, but which were so simple to integrate and had such an impact that I adopted them basically on the spot.
Choosing the distro was simple: Fedora Workstation has been my main OS for more than a year and, god is my witness, I even learned to like GNOME thanks to that, which was something I hadn't touched since the bad days of GNOME 3. Fedora had sane defaults on the desktop, so I decided to use Fedora Server for my server.
What you will read here is a journal of a process that started around April and, frankly, has not finished to this day, but I'd like to leave a written record for Bruno of the future, and if this helps anyone in a similar situation, hey, that is great!
Requirements & Reasoning
Why do I have a server? Why shouldn't I have a server? It is a cool project, it can store my samples, recordings, old college assignments... it is my vault and a machine that works for me. But the non-tl;dr version is that it started because of CD rot on two of my favorite Dreamcast games, way back in 2003–04. From that point forward, I wasted no time in building a private collection of my physical media... I rip what I buy, in the most accurate way possible, so I keep control of my things. Around the same time I started learning about compression algorithms, codecs and all of that jazz. Nowadays I have a good understanding of it, and even before finishing high school I had my simple DIY Debian ATX server, built as cheaply as possible; god only knows how my data survived throughout the years. Since 2014, my needs have evolved a lot and, although I never went over 2TB of data, thanks to work I need to worry about resilience and backup strategies. I also started doing some automation in my home, and I hate the fact that I depend on the cloud for that, which pushed me a bit further into starting this project.
Having said that, here are the key points of my requirements:
EVERYTHING in containers
Stuff will break, and nothing feels better than simply killing a container, or completely wiping its state, and having a new working one in seconds.
No dependency on cloud services
I care about my privacy
that is why I use Chrome, and I like to understand why and where my things are. This is why I still buy DVDs and Blu-rays in 2023, and let me tell you, nothing beats having the physical media of something you like that may or may not have gone the way of the dodo on some streaming service. I have a small collection of movies and series I have bought throughout the years that I'd like to carry with me no matter the machine.
Not too much complexity and specialized tools
I have already used Portainer on my server, but I have been in situations where I felt it was more limiting than helpful. Where are the damn config files of the tool? Where are the compose files I pasted into the webui to create my containers? Why go through the initial setup and have to download and later upload a backup if needed? I have also used Webmin for years and there were lots of things in it that I never touched. It felt like killing a fly with an atomic bomb. This is why this project will prefer the CLI over any GUI (also because of automation).
Use the tools fedora already provides and try to keep the usage of external tools (that are not on the official repos) to a minimum
For quick deployment of systems, both for work and for testing purposes. The last thing I was playing with during my free time was Nix and NixOS on a laptop. It would be ideal to have it in a VM that I can access over my network instead of on that one machine; saving snapshots and whatnot would benefit my learning process...
I'm fairly used to BTRFS and can't live without its snapshot feature. I'd like to keep using that.
My old NAS (an ODROID H2+ SBC) used to draw 24W at idle and 35W on average when I was doing stuff with it. I will go with more power for virtualization, but as I will be using a fairly recent Ryzen CPU, I expect to not even double that power consumption.
- Gigabyte Aorus B450m
- Ryzen 5 5600G (later I do plan on switching to the GE variant for ECC support)
- 128GB of Kingston Fury DDR4 @ 3200MHz (non-ECC)
- 2x HP SSD EX900 1TB
- 2x 8TB Seagate IronWolf NAS HDDs
Where I started
My idea is:
- The two 8TB HDDs will run in btrfs's built-in RAID1 mode.
- The first of the two HP 1TB SSDs will hold the root partition.
- The other 1TB SSD will hold the virtual machines (qcow2 images), containers and volumes.
I will create btrfs subvolumes on the RAID1 HDDs to store the home partition, the snapshots and the logs; this is all to help with my backup strategy, which will come later.
How to achieve that? Frankly, not during installation... I actually did the most basic install of Fedora; the only thing I changed was making sure it used btrfs for everything. Later, on the very first boot, I edited my fstab through SSH to properly mount those partitions the way I devised in this table. The table is actually the end result.
Why go over this trouble?
Because of snapshots! They are too good a feature not to make use of. The snapshot subvolume needs to be separate from the other subvolumes you want to back up. As the only thing I am really interested in is the home folder, @snapshots and @home are siblings in the filesystem structure. With that, I can later use btrbk to make automated backups to a USB disk, or to another drive in one of the vacant hot-swappable bays, basically worry-free and unattended.
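To give an idea of where this layout is heading, here is a minimal sketch of a btrbk configuration for it. The target path /mnt/backup and the retention values are assumptions, not what I actually run:

```
# /etc/btrbk/btrbk.conf — minimal sketch, paths and retention are assumptions
transaction_log        /var/log/btrbk.log
snapshot_preserve_min  2d
snapshot_preserve      14d

volume /mnt/data
  snapshot_dir @snapshots
  subvolume @home
    # hypothetical backup disk mounted at /mnt/backup
    target /mnt/backup/btrbk
```

A dry run with `sudo btrbk -n run` shows what btrbk would do before you commit to anything.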
BTRFS has allocation profiles for the different RAID modes you can use. Have a read of this article if you want to know more; what interests me is that it has some key benefits over hardware RAID, including in-place conversion. This is actually the reason why I decided on RAID1 and not RAID5, as I only have two disks for that on this server. This is no problem, as I have a NAS with a more robust setup which backs up the server every two days, but if I do plan on making this even better down the line, I have the option to convert my RAID1 into RAID5 on the spot.
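The in-place conversion mentioned above is done with a balance. A sketch of what that could look like, assuming the array is mounted at /mnt/data and a third disk (/dev/sdc1 here is hypothetical) has been added first:

```shell
# add the new disk to the existing filesystem
sudo btrfs device add /dev/sdc1 /mnt/data

# convert the data profile to RAID5; metadata is often kept on RAID1,
# as RAID5/6 metadata is still considered risky on btrfs
sudo btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/data
```

The balance rewrites every chunk with the new profile, so on large arrays this runs for a long time, but the filesystem stays mounted and usable throughout.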
The syntax to create a RAID1 profile with your disks is very simple:
mkfs.btrfs -L data -m raid1 -d raid1 /dev/sda1 /dev/sdb1
Per the man page, you need to specify the profile for the metadata and the data; do that with the
-m and -d flags respectively. Follow those with the disks or partitions you wish to use for the RAID, and use
-L to specify the disk label.
After that, you can mount any of the disks on your filesystem and use your raid-enabled filesystem.
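To confirm that both profiles actually came out as RAID1, a quick sanity check after mounting (assuming the mount point /mnt/data used later in this post):

```shell
# mount either member device; btrfs assembles the whole array
sudo mount /dev/sda1 /mnt/data

# prints "Data, RAID1" and "Metadata, RAID1" lines if the profiles took
sudo btrfs filesystem df /mnt/data

# per-device allocation across both disks
sudo btrfs filesystem usage /mnt/data
```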
BTRFS has an awesome feature called subvolumes. For all intents and purposes they are directories, but they have the power to be mounted as partitions via fstab... that is not the technical explanation, of course. I suggest reading the awesome BTRFS article on the Arch wiki, which is what helped me with this.
The fact that subvolumes can be mounted like partitions is not a new thing, of course, but I had only ever used subvolumes for backups and never really touched this feature of the filesystem.
I mounted the sda1 RAID under /mnt/data so I could start creating the subvolumes, and followed the
@ notation for the subvolumes on the root of /mnt/data by doing:
sudo btrfs subvolume create /mnt/data/@home
sudo btrfs subvolume create /mnt/data/@snapshots
sudo btrfs subvolume create /mnt/data/@var_log
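If you want to double-check what was created, the subvolumes can be listed from the mount point:

```shell
# lists each subvolume with its ID and path relative to the filesystem root
sudo btrfs subvolume list /mnt/data
```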
Next I needed the UUID of the partition where I created those subvolumes; I can get that easily with blkid, like so:
[root@fedora-server]# blkid /dev/sda1
/dev/sda1: UUID="68490e94-b2e6-4c7f-9cb2-d25bcb6f31d9" UUID_SUB="d5b6aa44-1632-4d10-86f9-9a9d01762cfe" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="0f18c33d-b627-7742-8739-399abff3f4f7"
I also did the same for the partition responsible for storing the containers and volumes.
With that info, I manually edited my fstab to mount the subvolumes and set everything up correctly.
UUID=68490e94-b2e6-4c7f-9cb2-d25bcb6f31d9 /mnt/data btrfs defaults 0 0
UUID=1cc8b713-e980-4017-ae3e-9098d8ae97f4 /var/lib/containers btrfs subvol=var_lib_containers 0 0
UUID=68490e94-b2e6-4c7f-9cb2-d25bcb6f31d9 /home btrfs subvol=@home,compress=zstd:1 0 0
UUID=68490e94-b2e6-4c7f-9cb2-d25bcb6f31d9 /.snapshots btrfs subvol=@snapshots 0 0
UUID=68490e94-b2e6-4c7f-9cb2-d25bcb6f31d9 /var/log btrfs subvol=@var_log,compress=zstd:1 0 0
As you can see, each fstab entry uses the UUID of the disk that contains the subvolume we want to mount, with the specific subvolume name passed via the subvol mount option. After editing the file, it is wise to run
systemctl daemon-reload to update the systemd mount units that are generated from fstab. Then just reboot and you should have a machine ready for the real fun stuff.
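If you would rather not reboot blind, the new entries can be exercised in place first; a quick check, assuming the fstab above:

```shell
# regenerate the systemd mount units from the edited fstab
sudo systemctl daemon-reload

# mount everything in fstab that is not already mounted;
# errors here mean a typo in a UUID or subvol= option
sudo mount -a

# confirm the btrfs mounts and the subvol= option of each one
findmnt -t btrfs -o TARGET,SOURCE,OPTIONS
```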
Later on I will configure my containers and VMs, and explain my strategy for using them in my workflow.