

If you have the room, why not go full ATX? More compatibility with available parts and room for future upgrades! Drives, GPUs, NICs, HBAs etc.


It’s a sign of the times that the effects of rapid weight loss are attributed to a drug since most people don’t know what it looks like!
In my experience it always goes wrong at the least opportune time. Before an important Zoom call, as you’re about to leave for the airport etc. My NAS and services (especially Home Assistant) are so mission critical now that I like to have a warm backup ready to go, even if it’s a stop-gap measure.


This, except it’s the CEO being questioned


IMHO RPi is still a good choice for HA. SD cards are cheap enough now that you can have a spare handy with Home Assistant OS already flashed on it, then if/when your current SD card dies, just swap it out and restore HA from last backup. Only takes a few minutes and happens about as often as a hard drive dies.
All depends on how much you value separation of concerns with a proxmox setup.


I DIY’d a PIKVM from an old Raspberry Pi 4 I had lying around for use in a homelab server. It’s been great, no complaints here, very handy if you need BIOS or direct console access from a phone or laptop. I especially like that you can hook up the PC power buttons to allow hard power cycling via the web interface. Though if you’re looking for something portable you’d probably skip that part.


Used enterprise drives are amazing value though. With enough redundancy in a RAID array it’s a great way to get storage in bulk.
I did similar recently! Proxmox has been amazing, I wish I did it sooner. It’s so nice being able to spin up as many containers/VMs as I like, and spread memory/CPU/disk as needed between various appliances. I’ve found Home Assistant a little snappier as well.
Not sure how old your setup is, but if it’s like mine (6+ years old) then a lot of the old ways of doing things via yaml and config files have been moved to the web UI instead.
I’d probably just add functionality slowly and try doing it all the “easy way” first (via the UI), referring to the docs as you go. Treat it as an opportunity to explore all the cool new stuff the team have added!

An interesting idea - I wonder if this would be beneficial for injury recovery, in the same way pedal assist can be.


Please drink a verification can.


At some point tech companies stopped focusing on what customers want/need, and started chasing their own delusions on what the next big thing is that will make them money. Solutions in search of problems, with billions of dollars of hype and marketing behind them. Crypto, NFTs, the metaverse, AI… it’s sad to see.


Starting to feel like South America needs to form its own NATO against their bully neighbour. Maybe Canada will join them.


Wow, thanks so much for the detailed rundown of your setup, I really appreciate it! That’s given me a lot to think about.
One area that took me by surprise a little bit with the HBA/SAS drive approach I’ve taken (and it sounds like you’re considering) is the power draw. I just built my new server PC (i5-8500T, 64GB RAM, Adaptec HBA + 8x 6TB 12Gb/s SAS drives) and initial tests show it idles at ~150W on its own.
I’m fairly sure most of that is the HBA and drives, though I need to do a little more testing. That’s higher than I was expecting, especially since my entire previous setup (Synology 4-bay NAS + 4x SATA drives, external 8TB drive, Raspberry Pi, switch, Mikrotik router, UPS) idles at around 80W!
I’m wondering if it may have been overkill going for the SAS drives, and a proxmox cluster of lower spec machines might have been more efficient.
Food for thought anyway… I can tell this will be a setup I’m constantly tinkering with.
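For anyone weighing up the same trade-off, the idle-draw difference adds up over a year. A rough sketch of the maths (the AUD 0.30/kWh electricity rate below is an assumed typical figure, not something from my bill):

```python
# Estimate the yearly running cost of an always-on server at a given idle draw.
# Rate is an assumed AUD 0.30/kWh - plug in your own tariff.

def annual_cost(idle_watts: float, rate_per_kwh: float = 0.30) -> float:
    """Return estimated yearly electricity cost for a constant idle draw."""
    kwh_per_year = idle_watts / 1000 * 24 * 365  # watts -> kWh over a year
    return kwh_per_year * rate_per_kwh

print(f"New server (~150 W idle): ~${annual_cost(150):.0f}/yr")
print(f"Old setup   (~80 W idle): ~${annual_cost(80):.0f}/yr")
```

At those numbers the new box costs roughly AUD 180 more per year to leave running, which is worth factoring in next to the upfront savings on used drives.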


I’m curious why you feel these are easier to run on bare metal? I only ask as I’ve just built my first proxmox PC with the intent to run TrueNAS and Home Assistant OS as VMs, with 8x SAS enterprise drives on an HBA passed through to the TrueNAS VM.
Is it mostly about separation of concerns, or is there some other dragon awaiting me (aside from the power bills after I switch over)?


The Instapak stuff I’m thinking of was basically medium sized bags that acted like a heat pack, where you break something inside the bag to combine two chemicals then shake it, which makes it expand and harden quickly.


I think the stuff is called Instapak expanding foam. Personally I think I’d remove the GPU and any mechanical drives to play it safe, but I’ve had a PC shipped to me before fully assembled with Instapak around the GPU (no HDDs, only SSDs) and it was fine. Ideally ship it in the original box for the PC case.


[Guide] NAS Killer 6.0 - DDR4 is finally cheap - serverbuilds.net Forums
https://forums.serverbuilds.net/t/guide-nas-killer-6-0-ddr4-is-finally-cheap/13956
Hopefully this resource is handy for you - I’m going through the same process at the moment.


Piggybacking off this, it’s worth noting if you’re adding SAS capability to your PC via one of these cards, you can look into used enterprise SAS HDDs for cheap. They’re often sold in bulk - I just picked up 72TB (12x6TB) of 7200RPM drives for AUD480 total. Availability is very region-specific and of course it’s up to you to decide if it’s worth the risk for your needs, but if you’re using RAID6 or equivalent (capable of handling two dead drives at once) the risk is minimal. Be sure to buy from sellers with a warranty (12 months minimum), and check the drives once they arrive. But in general enterprise drives are MUCH more resilient than consumer drives.
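To put numbers on the RAID6 trade-off with a batch like the one above, here’s a quick sketch (uses the 12x 6TB / AUD 480 figures from my purchase; RAID6 reserves two drives’ worth of capacity for parity, which is what lets the array survive any two simultaneous failures):

```python
# Usable capacity and cost per usable TB for a RAID6 array of identical drives.
# RAID6 keeps two drives' worth of parity, so usable space is (n - 2) drives.

def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    if num_drives < 4:
        raise ValueError("RAID6 needs at least 4 drives")
    return (num_drives - 2) * drive_tb

usable = raid6_usable_tb(12, 6)   # 60 TB usable out of 72 TB raw
cost_per_tb = 480 / usable        # cost per usable TB in AUD
print(usable, round(cost_per_tb, 2))
```

So even after giving up two drives to parity, that works out to about AUD 8 per usable TB, which is hard to beat with new consumer drives.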
So like Beneath A Steel Sky but reverse Uno card.