

Also of note - if you’re using Docker (and Linux), make sure the user/group IDs match across everything to eliminate any permissions issues.
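A minimal way to check that up front - this sketch assumes a hypothetical bind-mount path and the uid:gid you intend to pass via `docker run --user` (both placeholders):

```python
import os

# Placeholder bind-mount path and the uid:gid the container will run as,
# e.g. docker run --user 1000:1000 -v /srv/appdata:/data ...
HOST_PATH = "/srv/appdata"
CONTAINER_UID, CONTAINER_GID = 1000, 1000

st = os.stat(HOST_PATH)
if (st.st_uid, st.st_gid) != (CONTAINER_UID, CONTAINER_GID):
    print(f"Mismatch: {HOST_PATH} is owned by {st.st_uid}:{st.st_gid}, "
          f"but the container will run as {CONTAINER_UID}:{CONTAINER_GID}")
else:
    print("Ownership matches - no permission surprises expected.")
```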
— GPG Proofs —
This is an OpenPGP proof that connects my OpenPGP key to this Lemmy account. For details check out https://keyoxide.org/guides/openpgp-proofs
[ Verifying my OpenPGP key: openpgp4fpr:27265882624f80fe7deb8b2bca75b6ec61a21f8f ]
Not really, but I can give you my reasons for doing so. Know that you’ll need some shared storage (NFS, CIFS, etc) to take full advantage of the cluster.
I hope that helps give some reasons for doing a cluster, and apologies for not replying immediately. I’m happy to share more about my homelab/answer other questions about my setup.
Those are beasts! My homelab has three of them in a Proxmox cluster. I love that for not a ton of extra money you can throw in a PCIe expansion slot, and the power consumption for all three is less than my secondhand Dell tower server.
Sorry, I wasn’t clear - I use PowerDNS so that I can more easily deploy services that can be resolved by my internal networks (deployed via Kubernetes or Terraform). In my case, the secondary PowerDNS server does regular zone transfers from the primary to ensure it has a copy of all the A, PTR, CNAME, etc. records.
But PowerDNS (and really any DNS server) can act as either an authoritative server or a recursor. In my case, the PDNS servers are authoritative for my homelab zone/domain, and they perform recursive lookups (with caching) for non-authoritative domains like google.com, infosec.pub, etc. By pointing my PDNS servers at Pi-hole for recursive lookups, I keep ad blocking while still allowing my automation to handle the homelab records.
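If it helps, here’s a rough way to check both behaviors from a client with dnspython - the server IP, zone, and record names below are placeholders:

```python
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.10.53"]  # placeholder: one of the PowerDNS servers

# Authoritative answer for a record in the homelab zone (placeholder name)
for rdata in resolver.resolve("service.home.example", "A"):
    print("internal:", rdata.address)

# Recursive answer for a public domain (forwarded on to Pi-hole for ad blocking)
for rdata in resolver.resolve("infosec.pub", "A"):
    print("external:", rdata.address)
```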
This is overkill.
I have a dedicated Raspberry Pi for Pi-hole, then two VMs running PowerDNS in Master/Slave mode. The PDNS servers use the Pi-hole as their primary recursive lookup, followed by some other Internet privacy DNS server that I can’t recall right now.
If I need to do maintenance on the Pi-hole, PowerDNS can fall back to the internet DNS server. If I need to do updates on the PowerDNS cluster, I can do them one at a time to reduce the outage window.
EDIT: I should have phrased the first sentence as “My setup is overkill” rather than “This is overkill” - the OP is asking a very valid question, and the phrasing of my post’s first sentence could be taken multiple ways.
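Before taking the primary down for updates, one rough way to confirm the secondary has a current copy of the zone is to compare SOA serials - a dnspython sketch with placeholder IPs and zone name:

```python
import dns.resolver  # pip install dnspython

ZONE = "home.example"                   # placeholder zone
SERVERS = ["10.0.10.53", "10.0.10.54"]  # placeholder primary/secondary PDNS IPs

serials = {}
for ip in SERVERS:
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [ip]
    soa = list(r.resolve(ZONE, "SOA"))[0]
    serials[ip] = soa.serial
    print(ip, "serial", soa.serial)

print("in sync" if len(set(serials.values())) == 1 else "serials differ - wait for a zone transfer")
```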
I put my Plex media server to work running Ollama - it has a GPU for transcoding that’s not awful for simple LLMs.
Hosting on the public web isn’t too crazy - start with port forwarding on standard ports (443 for SSL/web) and add in a dynamic DNS address.
More than likely your residential ISP doesn’t change your IP that often, but dynamic DNS solves that problem before it hits. I use Cloudflare, but mostly because I’m lazy and haven’t moved off of them after their most recent sketchy behavior.
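The dynamic DNS part can be as small as a cron job that pushes your current public IP to an existing A record. A rough sketch against the Cloudflare v4 API - the token, zone/record IDs, and hostname are all placeholders:

```python
import requests  # pip install requests

API_TOKEN = "..."                  # placeholder: API token with DNS edit rights
ZONE_ID = "..."                    # placeholder zone id
RECORD_ID = "..."                  # placeholder id of the existing A record
RECORD_NAME = "home.example.com"   # placeholder hostname

# Discover the current public IP, then push it to the A record.
public_ip = requests.get("https://api.ipify.org", timeout=10).text.strip()
resp = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"type": "A", "name": RECORD_NAME, "content": public_ip, "ttl": 300},
    timeout=10,
)
resp.raise_for_status()
print(f"{RECORD_NAME} -> {public_ip}")
```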
Sure! I mostly followed this random youtuber’s video for getting Wyoming protocols offloaded (Whisper/Piper), but he didn’t get Ollama to use his GPU: https://youtu.be/XvbVePuP7NY.
For getting the Nvidia/Docker passthrough, I used this guide: https://www.bittenbypython.com/en/posts/install_ollama_openwebui_ubuntu_nvidia/.
It’s working fairly great at this point!
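For anyone following the same guides, a quick sanity check that Ollama answers on its default port (and that the GPU passthrough actually took) might look like this - the model name is a placeholder:

```python
import subprocess
import requests  # pip install requests

# Ask Ollama for a short completion over its default HTTP API.
reply = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hi in five words.", "stream": False},
    timeout=120,
)
print(reply.json().get("response", "").strip())

# If the container can see the GPU, the model should show up as used GPU memory.
print(subprocess.run(["nvidia-smi", "--query-gpu=memory.used", "--format=csv"],
                     capture_output=True, text=True).stdout)
```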
I spun up a new Plex server with a decent GPU - and decided to try offloading Home Assistant’s Preview Voice Assistant TTS/STT to it. That’s all working as of yesterday, including an Ollama LLM for processing.
Last on my list is figuring out how to get Home Assistant to help me find my phone.
This is the way. Layer 3 separation for services you wish to access outside of the home network and the rest of your stuff, with a VPN endpoint exposed for remote access.
It may be overkill, but I have several VLANs for specific traffic:
There are two new additions: an ext-vpn VLAN and an egress-vpn VLAN. I spun up a VM that’s dual-homed, running its own WireGuard/OpenVPN client on the egress side and serving DHCP on the ext-vpn side. The latter has its own wireless SSID so that anyone who connects to it is automatically on a VPN into a non-US country.
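An easy way to confirm the egress side is doing its job is to run something like this once from the normal LAN and once from the VPN-backed SSID and compare - the IP echo and geo lookup services here are just examples:

```python
import requests  # pip install requests

# Public IP as seen from this network, plus a rough country lookup.
ip = requests.get("https://api.ipify.org", timeout=10).text.strip()
geo = requests.get(f"https://ipinfo.io/{ip}/json", timeout=10).json()
print(ip, geo.get("country"))
```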
Restic to Wasabi S3.
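Roughly what the job looks like, if it helps - the bucket name and credentials are placeholders, and restic picks the repository and keys up from the environment:

```python
import os
import subprocess

env = {
    **os.environ,
    "RESTIC_REPOSITORY": "s3:s3.wasabisys.com/my-backup-bucket",  # placeholder bucket
    "RESTIC_PASSWORD": "change-me",                               # placeholder
    "AWS_ACCESS_KEY_ID": "...",                                   # Wasabi access key
    "AWS_SECRET_ACCESS_KEY": "...",                               # Wasabi secret key
}

# Back up the data directory, then trim old snapshots.
subprocess.run(["restic", "backup", "/srv/appdata"], env=env, check=True)
subprocess.run(["restic", "forget", "--keep-daily", "7", "--keep-weekly", "4", "--prune"],
               env=env, check=True)
```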
For the nginx reverse proxy - that’s how I ran things prior to moving to microk8s. If you want I can dig out some config examples. The trick for me was to set up host-based stanzas, then update my internal DNS with A records for each docker service all pointing to the same docker host.
With Kubes + external-dns + nginx ingress, I can just do a deployment/service/ingress and things automatically work now.
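Until I dig the actual config out, here’s a rough check of the pattern itself - a few placeholder service names that should all resolve to the same docker host, with nginx choosing the backend by Host header:

```python
import socket
import requests  # pip install requests

# Placeholder service hostnames; each has an internal A record for the docker host.
names = ["jellyfin.home.example", "grafana.home.example", "paperless.home.example"]

for name in names:
    ip = socket.gethostbyname(name)
    # verify=False only because internal certs are often self-signed
    status = requests.get(f"https://{name}/", timeout=10, verify=False).status_code
    print(f"{name:<28} -> {ip} (HTTP {status})")
```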
I love my Synology DS1618 - it’s a bit older now, but the 10Gbps is a delight.
I want to move my 4x SFP+ from their current MikroTik switch to my new Brocade. Then I’m very strongly debating running both VM and Ceph traffic over the same 10Gbps connections, removing the ugly USB Ethernet dongles from my three Proxmox Lenovo M920q boxes.
After that? Maybe look at finally migrating Vault off my ClusterHat to Kubernetes.
Ceph is… fine. I feel like I don’t know it well enough to properly maintain it. I only went with 10GbE because I was basically told on a homelab subreddit that Ceph will fail in unpredictable ways unless you give it crazy speeds for its storage and network. And yet, it has perpetually complained about too many placement groups:
1 pools have too many placement groups
Pool tank has 128 placement groups, should have 32
Aside from that and the occasional falling over of monitors, it’s been relatively quiet? I’m tempted to use the Synology for all the storage and let the 10GbE network be carved up into VM traffic instead. Right now I’m using bonded USB 1GbE copper and it’s kind of sketchy.
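For what it’s worth, one way to quiet that particular warning (assuming the pool really is oversized) is to let the autoscaler manage it, or set pg_num to what it suggests - a hedged sketch shelling out to the ceph CLI:

```python
import subprocess

POOL = "tank"

# Option 1: let Ceph manage placement group counts itself.
subprocess.run(["ceph", "osd", "pool", "set", POOL, "pg_autoscale_mode", "on"], check=True)

# Option 2: set pg_num to what the autoscaler suggested (32 in the warning above).
# subprocess.run(["ceph", "osd", "pool", "set", POOL, "pg_num", "32"], check=True)
```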
To be fair - both Synologies are running big spinny NAS drives - I could reduce my capacity and my power usage by going with SSDs, but shockingly, I can’t seem to figure out what to cull from the 35TB of combined storage.
I am debating moving my Vault cluster from a ClusterHat to pods on my fresh kubes deployment - and if I virtualize Pi-hole, that would also reduce some power consumption. Admittedly, I’m going overboard on my “homelab” - it’s more of a full-blown SMB at this point, with a Palo firewall and a Brocade 48-port switch. I do infosec for a living though, and there’s reason to most of my madness.
Unfortunately, no - not specifically. I want to get a Kill A Watt meter at some point. The best I can do is share my UPS’s reported power output - currently around 202-216W, but that includes both my DS1618 and the DS415+ along with my Ubiquiti NVR and two of my Lenovo M920Qs.
I should probably look at what adding the 5-bay external expansion would take power-wise, and maybe decommission the very aged 415.
Edit: this is also my annual reminder to finally hook up the USB port on my UPSs to… something. I really wanted some smart “oh shit, there’s a power outage and we’re running low on reserves - intelligently and gracefully shut things off in this order” logic, but I never got around to it.
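The sort of thing I have in mind would sit on one box, watch the UPS through NUT, and walk down an ordered shutdown list - a rough sketch, assuming upsc is configured; the UPS name, hostnames, and threshold are placeholders:

```python
import subprocess
import time

UPS = "myups@localhost"   # placeholder NUT UPS name
# Hosts to power down, least critical first (placeholders), assuming SSH keys are in place.
SHUTDOWN_ORDER = ["plex.home.example", "k8s-worker.home.example", "proxmox1.home.example"]
THRESHOLD = 40            # % battery remaining that triggers the sequence

def ups_readings():
    out = subprocess.run(["upsc", UPS], capture_output=True, text=True, check=True).stdout
    return dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)

while True:
    readings = ups_readings()
    on_battery = "OB" in readings.get("ups.status", "")
    charge = int(readings.get("battery.charge", "100"))
    if on_battery and charge <= THRESHOLD:
        for host in SHUTDOWN_ORDER:
            subprocess.run(["ssh", f"root@{host}", "poweroff"])
            time.sleep(60)  # give each tier time to stop cleanly
        break
    time.sleep(30)
```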
This is basically my homelab. Synology 1618 + 3x Lenovo M920Q systems with 1TB NVMes. I upgraded to a 10Gb fibre switch so they run Proxmox + Ceph, with the Synology offering additional storage over fibre via its add-on 10Gb fibre card.
That’s probably a few steps up from what the OP is asking for.
Splitting out storage and compute is definitely a good first step to improve performance and increase failure resiliency.
I bought a car that comes with a “free” 300k/30-year warranty, but only if you do oil changes every 4k miles or 3 months. Maybe this guy has something similar?
For me, I may try and keep it up for a bit, but driving to one particular dealer every 3 months just to get a ridiculous warranty that will probably never actually pay out isn’t worth it.