

Do you version your compose files in git? If so, how does that work with the Dockge workflow?




I highly recommend you use Proxmox as the base OS. Proxmox makes it easy to spin up virtual machines, and easy to back up and revert to backups. So you’re free to play around and try stupid stuff. If you break something in your VM, just restore a backup.
In addition to virtual machines, Proxmox also does “LXC containers”, which are system-level containers. They are basically a very lightweight virtual machine, with some caveats, like running the same kernel as the host.
Most self-hosting software is released as a Docker image. Docker provides application-level containers, meaning only the bare minimum to run the application is included. You don’t enter a Docker container to update packages; instead you pull down a new version of the image from the author.
There are three ways to run Docker on Proxmox: directly on the host (discouraged, since it pollutes the hypervisor itself), inside an LXC container (works, but unsupported and known to break on Proxmox upgrades), or inside a VM (the recommended way).
The “overhead” of running Docker inside a VM on the host is so negligible, you don’t need to worry about it.
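For reference, backing up and restoring a VM from the Proxmox shell looks roughly like this (the VM IDs and storage name are just examples; the web UI can do the same thing):

```
# Snapshot-mode backup of VM 100 to the 'local' storage
vzdump 100 --storage local --mode snapshot --compress zstd

# Restore the archive into a new VM with ID 101
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 101
```

Snapshot mode means the VM keeps running while the backup is taken.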


I had never heard of dockge before, but this sounds like the killer feature for me:
File based structure - Dockge won’t kidnap your compose files, they are stored on your drive as usual. You can interact with them using normal docker compose commands
Does that mean I can just point it at my existing docker compose files?
My current layout is a folder for each service/stack, which contains docker-compose.yaml plus data folders etc. for the service. The compose and related config files are versioned in git.
I have Portainer, but rarely use it, and won’t let it manage the configuration, because that interfered with versioning the config in git.
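For what it’s worth, a stack in that layout looks roughly like this (Jellyfin is just an example service; image and ports are illustrative):

```yaml
# stacks/jellyfin/docker-compose.yaml — versioned in git
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    volumes:
      - ./config:/config   # data folders live next to the compose file (gitignored)
      - ./cache:/cache
    ports:
      - "8096:8096"
```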


The article introduction is gold:
In the unlikely case that you have very little RAM and a surplus of video RAM, you can use the latter as swap.
I’m using NixOS with KDE for my HTPC, though I’m not sure I’d recommend it unless you’re eager to learn Nix.
The upshot is that it’s super stable, and everything is declared and versioned in the git repo, including my LIRC device codes and the Node-RED automation flow for LIRC and MQTT. (The HTPC shows up as an MQTT switch in Home Assistant for turning it on and off, and the HTPC turns the TV and amplifier on or off through IR as the PC turns on or off.)
I mostly use Firefox and various streaming sites for video, and the Spotify desktop client for music. A gyro mouse/keyboard is the main input device, plus wireless Xbox 360 controllers for streaming games with Moonlight (from Flathub).
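As a sketch, the Home Assistant side of such an MQTT switch is just a few lines of config (the topic names and payloads here are made up):

```yaml
# configuration.yaml
mqtt:
  switch:
    - name: "HTPC"
      command_topic: "htpc/power/set"
      state_topic: "htpc/power/state"
      payload_on: "ON"
      payload_off: "OFF"
```

Node-RED subscribes to the command topic and sends the IR codes through LIRC.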


That’s hilarious :D
What It IS
A real DOOM port - Uses the actual DOOM engine via doomgeneric, playing real levels with real game logic
What it ISN’T:
…
A practical use of your PCB editor - This serves no purpose other than being delightfully absurd


Ugh indeed! Discord is an information black hole, where information enters never to be found again by search engines or even its members
I can understand replacing IRC with Discord, but using Discord as a forum is madness


The other day I necro’d a nearly 3-year-old forum thread with some new information. A few hours later the person from 3 years ago came back and thanked me, because the new information helped them. Sometimes necromancy is good :)


Also, it’s probably a lot easier for them to test all their cards under equal conditions that way. Sounds like ideally they’d want to freeze all their versions for at least a quarter of a year or more.


I think Mint does this out of the box, but check that Timeshift is set up for automatic snapshots. It’s meant for system-level snapshots (basically everything outside the home folder), so you can easily revert if an update or something breaks the system.
Also consider some form of periodic external backup of her files and documents in the home folder, to protect against hardware failure.
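If she’s comfortable with a terminal, a quick sanity check looks something like this (the comment text is just an example):

```
# List existing snapshots
sudo timeshift --list

# Take a one-off snapshot before risky changes
sudo timeshift --create --comments "before upgrade"
```

Otherwise the Timeshift GUI can show and schedule the same thing.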


Oh good point, these are modern times, exFAT is a thing now.
I remember having trouble getting Ubuntu to mount exFAT years ago, so I avoided it ever since. But that was many years ago, with an old kernel.


Word of warning on “safe removal” of external hard drives: you really want to click “Eject” or “Safely remove” every time before unplugging. This is much more important than on Windows, due to the way Linux handles buffers and caching. A copy operation can look “finished” while the data still sits in the write cache, not yet actually written to the disk.
NTFS is no problem (But as mentioned earlier in the thread the permission system is different). I usually format all my external devices with NTFS so they’ll work on both Linux and Windows machines without any fuss.
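From a terminal, the equivalent of clicking “Eject” is roughly this (replace sdX with your actual device):

```
# Flush pending writes, then unmount and power the drive down
sync
udisksctl unmount -b /dev/sdX1
udisksctl power-off -b /dev/sdX
```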


Sounds a lot like the EEA (Iceland, Norway, Liechtenstein).


It’s not, unless you know the keys.
Keys are created by the software/app made by the service provider, like WhatsApp / Meta or Google. How is the key created, and is a copy sent back to WhatsApp? “Securely” and “No” they claim, and you just have to trust them.
That can change if WhatsApp needs to comply with new laws.
Signal is a bit different because the app is fully open source, so the code can be audited to verify the integrity of the encryption. They would still need to comply with laws or exit that market, but whatever they do would be 100% transparent.


Thanks for sharing! TIL about autofs. Now I’m curious to try NFS again.
What’s the failure mode if the NFS happens to be offline when PBS initiates a backup? Does PBS try to back up anyway? What if the NFS is offline while PBS boots?
EDIT: What was the reason for bind mounting the NFS share via the host to the container, and NFS mounting from NAS to host?
I did the NFS-mount directly in the PBS. (But I am running my PBS as a VM, so had to do it that way)
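For anyone else curious, an autofs setup for an NFS share is apparently only a couple of config lines — the server name and export path below are made up:

```
# /etc/auto.master.d/nas.autofs
/mnt/nas  /etc/auto.nas  --timeout=60

# /etc/auto.nas
backup  -fstype=nfs4,soft  nas.lan:/export/backup
```

The share then gets mounted on demand at /mnt/nas/backup and unmounted again after the timeout.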


I run PBS as a virtual machine on Proxmox, with a dedicated physical harddrive passed through to PBS for the data.
While this protects from software failures of my VMs, it does not protect from catastrophic hardware failure. In theory I should be able to take the dedicated harddrive out and put it in any other system running a fresh PBS, but I have not tested this.
I tried running the same PBS with an external NFS share, but had speed and stability issues, mainly due to the hardware of the NFS host. And I wasn’t aware of autofs at the time, so the NFS share stayed disconnected.
SuSE is one of the two major enterprise Linux distributions, with RedHat being the other. I would assume servers make up the bulk of their business, but they provide desktops too.
RedHat is probably better known to most end-users, due to their Fedora community distribution, and their heavy involvement in Gnome.
SuSE’s community distribution is openSUSE.
EDIT: Fittingly, the very top of their website says “Make your old Windows 10 PC fast and secure again!” and links to https://endof10.org/


Divide Germany you say? I feel like that’s been done before


“Package” as in taking the raw chip and making it a finalised electronics component, as suggested here?
There’s nothing wrong with Mint, it’s solid. If it works for you don’t stress about it
The only thing is that it’s based on Ubuntu LTS, so its packages can be a bit old. That doesn’t really matter much unless you have very new hardware and need the driver support. Then something Fedora-based like Bazzite would be better.
For getting newer software you can use flatpak/Flathub.
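Mint ships with Flatpak support out of the box; if Flathub isn’t enabled yet, it’s two commands (the Spotify app ID is just an example):

```
# Add the Flathub remote once, then install apps from it
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.spotify.Client
```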
Bazzite is also “immutable”, which makes it harder to break on a system level, but also harder to tinker with on a system level. Mint is a “normal” distribution in that regard. Mint does have Timeshift for taking system-level snapshots, on the off chance that an update or your tinkering breaks something. It’s worth checking that Timeshift is set up for automatic snapshots.