It seems like there are exceptions to the “no partial upgrades” rule that haven’t been discussed: you can pin your kernel version, primarily to give packages like zfs time to catch up to the latest kernel.
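On Arch, that kind of pinning is usually done with IgnorePkg in /etc/pacman.conf; a minimal sketch, where the package names depend on which kernel flavor you actually run:

```
# /etc/pacman.conf
# hold the kernel back until out-of-tree modules like zfs catch up
IgnorePkg = linux linux-headers
```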
I’ve never used bcachefs and have only recently read about some of the drama. I wish the project the best, but at this point it’s hard to beat zfs.
Here’s my journey from arch to proxmox and back to arch: https://bower.sh/homelab
I was in your shoes and decided to simplify my system. It’s really hard to beat arch, and I missed having full control over the system. Proxmox is awesome, but it felt like overkill for my use cases. If I want to experiment with new distros, I’d probably just run distrobox or qemu directly. Proxmox does a lot, but it ended up just being a GUI on top of qemu with some built-in backup systems. And if you end up using zfs anyway … what’s the benefit?
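For the “experiment with new distros” case, distrobox keeps it to a couple of commands; a minimal sketch, with the container and image names just as examples:

```
# spin up a throwaway Debian userland on top of the host kernel
distrobox create --name debian-box --image debian:stable
distrobox enter debian-box
```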
IP and copyright are both tools used to control individuals, not corporations. We are seeing the reality in realtime with LLMs disregarding them wholesale.
I’ve been slowly working on a set of decoupled services that could replace some aspects of GitHub.
https://pr.pico.sh/ — a pastebin supercharged for git collaboration.
https://pgit.pico.sh/ — static site generator for git repos.
Both are still WIP but I think they are pretty handy
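For context, pr.pico.sh builds on git’s plain patch workflow rather than a web PR model (see the site for its actual commands); the underlying flow with stock git looks roughly like this, with branch and directory names made up:

```
# contributor: turn a branch into a patch series
git format-patch main..feature -o patches/

# maintainer: apply the series on their side
git am patches/*.patch
```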
If you want low effort and high value, get a Synology 2-bay. If you want full control over the host OS, run Debian/arch with zfs.
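The DIY route mostly boils down to installing zfs and carving out a pool; a minimal sketch, where the pool name, dataset, and device paths are only examples (use /dev/disk/by-id paths for a real pool):

```
# mirror two disks into a pool and create a dataset for the NAS data
zpool create tank mirror /dev/sda /dev/sdb
zfs create -o compression=lz4 tank/media
zfs set mountpoint=/srv/media tank/media
```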
I didn’t use any of the terms you used in your post. I’m not using those products, partly for the reasons I discussed, but also because I don’t see them as particularly useful beyond the cult of personality building them.
I used Shotcut for light video editing and it worked great, no complaints.
I went down a similar path as you. The entire proxmox community argues for keeping it an appliance with nothing extra installed on the host. But the second you need to share data, like for a NAS, the tooling is a huge pain. I never found a solution that felt right.
So my solution was to make my NAS a zfs pool on the host. Bind mounting works for CTs but not VMs, which is an annoying feature asymmetry, so I also installed an NFS server to expose the NAS to VMs.
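Concretely, that ends up being one line per container plus an NFS export for everything else; a minimal sketch, where the dataset path, CT id, and subnet are placeholders:

```
# bind-mount the host dataset into an LXC container (CT 101)
pct set 101 -mp0 /tank/nas,mp=/mnt/nas

# /etc/exports -- share the same dataset with VMs over NFS
/tank/nas 192.168.1.0/24(rw,sync,no_subtree_check)

# reload the export table
exportfs -ra
```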
I know that’s not what you want but just wanted to share what I did.
The feature asymmetry between CTs and VMs basically meant CTs dropped out of my orchestration.
Here’s my homelab journey: https://bower.sh/homelab
Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don’t support splitting the GPU across guests. At the end of the day it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to arch running everything with systemd and quadlet.
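Quadlet here is podman’s systemd generator: you drop a .container file under ~/.config/containers/systemd/ and it becomes a user service. A minimal sketch, with the image, port, and volume as placeholders:

```
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
Volume=/srv/web:/usr/share/nginx/html:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it shows up as web.service.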
It is being rewritten in Swift.
I have mixed views on this. Omarchy is popular purely because of DHH; I don’t see anything of benefit beyond the notoriety of a famous dev.
There’s also some dissenting opinion about DHH in general that taints the project: https://drewdevault.com/2025/09/24/2025-09-24-Cloudflare-and-fascists.html
Being based on hyprland also has some potential social issues.
I don’t get why cloudflare didn’t donate to arch instead.
While it’s not the same thing, I use an rss-to-email service that hits the minimal sweet spot for me:
https://pico.sh/feeds