She/Her - Was bullied off reddit by mean moderators, but it’s a corporation anyway - 🏳️‍⚧️omni, heart - Pro kindness|gressiveness, Anti cruelty|bullshit.

  • 2 Posts
  • 314 Comments
Joined 7 months ago
Cake day: February 23rd, 2025



  • As far as I can tell the USA, Britain, Germany, Denmark, Italy, New Zealand, Russia, Australia and China are fucked. Fascism, nationalism, corporate capitalism, segregation, genocide, history erasure, political extremism and regression… Looking pretty fucking bleak. I’d include Palestine but it was formatted by Britain and replaced with Israel, and it doesn’t technically exist like it used to.



  • Self-hosting be like ^^

    I think I had issues similar to that. Perhaps the PiHole is running a conflicting DHCP server? I have my own set of weird issues… Bad connectivity, so I need a WiFi range extender, but it’s not a true extender: it has its own IP address, acts as a router sometimes and doesn’t forward DNS queries to the main router. Add the lack of NAT loopback, no changeable DNS settings, and AdGuard Home apparently taking precedence on that side of the house, and I have a cocktail of connection-issue bs lol. The main router resolves DNS perfectly fine, but if I’m connected to the extender I have to add DNS rewrites to AGH (rough sketch below)… which works for most services…

    The journey is largely about overcoming obstacles aha, and the reward for doing so… Hope yours goes well!
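
    For reference, a DNS rewrite in AdGuard Home is just a “this name resolves to this IP” mapping. A rough sketch of what that looks like in AdGuardHome.yaml, assuming a recent AGH build that keeps rewrites under the filtering section (they can also just be added through Filters → DNS rewrites in the web UI; the domain and IP below are made-up examples):

    ```yaml
    # AdGuardHome.yaml (excerpt) - sketch only; assumes rewrites live under "filtering"
    filtering:
      rewrites:
        # Point the service's hostname at the machine that actually runs it,
        # so clients stuck behind the extender still resolve it to the local box.
        - domain: books.example.home        # hypothetical hostname
          answer: 192.168.1.50              # hypothetical local IP of the server
        - domain: "*.example.home"          # wildcard rewrite covers every subdomain
          answer: 192.168.1.50
    ```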


  • Yes! This.

    I have one machine for network-shared storage, and thus a user for login and read/write powers. The same storage is used by other machines to save their files, so the autonomous users for CCTV and qBittorrent each needed the same UID as the Samba login, giving each program rw permissions (rough sketch at the end of this comment).

    And those containers had to be privileged, iirc, in order for each root (UID 0) to access the shared storage properly. I may be wrong though.
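
    For anyone wanting to copy this, here’s roughly what the UID matching looks like with linuxserver-style images, which take PUID/PGID environment variables. This is only a sketch: it assumes the Samba user that owns the share is UID/GID 1000 (check with `id <user>`), and the image tag and paths are just examples.

    ```yaml
    # compose.yaml (excerpt) - sketch only, not my exact setup
    services:
      qbittorrent:
        image: lscr.io/linuxserver/qbittorrent:latest
        environment:
          # Match the UID/GID of the Samba user that owns the shared storage,
          # so files the container writes keep the right ownership on the share.
          - PUID=1000   # assumption: the Samba user's UID
          - PGID=1000   # assumption: the Samba user's GID
        volumes:
          - /mnt/shared/downloads:/downloads   # hypothetical mount of the network share
    ```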


  • I have a router given to me by my ISP, which incidentally has fewer features than their older model, so I was wondering, if you know: would some ‘aftermarket’ gateways also act as a DNS server? Sometimes it’d be great to have resolution handled completely by the gateway instead of a separate machine - especially as some of my services just don’t seem to declare their names. And my stock router has a terrible downside - no NAT loopback. And - the reason I’m in this pickle - they’ve removed custom DNS settings.


  • Currently I only have my fiancée on board, but the moment something requires more than setting a custom login domain and user+pass, her patience dwindles. She’s a good baseline for me to know that most people won’t be happy with a manual cryptographic handshake between contacts (Matrix/XMPP) or fucking with IP:port settings. I don’t like to damage someone’s sense of independence, but sometimes these things need someone who can blitz through the settings themselves, especially if you have to troubleshoot why it didn’t just work.



  • At your DNS provider, the domain name must point to the public IP of your router. All devices connected to your network share the same public IP.

    On the router, ports 80 and 443 must be forwarded to the local IP of the machine running Nginx.

    Nginx must then point the domain name at the local IP (and port) of the machine running ABS.

    Are you using Nginx or Nginx Proxy Manager?

    Edit: If plain Nginx, since they’re in the same Docker setup, instead of pointing the Nginx config at ABS’ local IP you can use 172.17.0.1 (iirc) plus the port you used, or the container_name from compose.yaml, e.g. audiobookshelf:3748 (rough sketch at the end of this comment).

    I use Nginx Proxy Manager, so my methods may differ slightly.

    Note about domains: It’s always good to buy one.

    RIP DuckDNS… it used to be a fairly reliable Dynamic DNS (DDNS) provider, and it cost nothing to make an account with five domains… However, apparently, it was shut down without notice under a month ago.
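
    Re: the container_name route mentioned in the edit above, a minimal compose sketch, assuming plain Nginx and ABS sit in the same compose project (image tags, names and ports are examples, not necessarily what you’re running):

    ```yaml
    # compose.yaml (excerpt) - sketch only
    services:
      nginx:
        image: nginx:latest
        ports:
          - "80:80"
          - "443:443"
        # In the Nginx site config, proxy_pass can target the service by name, e.g.
        #   proxy_pass http://audiobookshelf:80;
        # because containers on the same compose network resolve each other by name
        # (80 is ABS' default internal port; adjust if yours differs).
      audiobookshelf:
        image: ghcr.io/advplyr/audiobookshelf:latest
        container_name: audiobookshelf
        # No published port is strictly needed if only Nginx talks to it internally.
    ```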





  • For inspiration, here’s my list of services:

    | Name | ID No. | Primary Use |
    | --- | --- | --- |
    | heart (Node) | | ProxMox |
    | guard (CT) | 202 | AdGuard Home |
    | management (CT) | 203 | NginX Proxy Manager |
    | smarthome (VM) | 804 | Home Assistant |
    | HEIMDALLR (CT) | 205 | Samba/Nextcloud |
    | authentication (VM) | 806 | BitWarden |
    | mail (VM) | 807 | Mailcow |
    | notes (CT) | 208 | CouchDB |
    | messaging (CT) | 209 | Prosody |
    | media (CT) | 211 | Emby |
    | music (CT) | 212 | Navidrome |
    | books (CT) | 213 | AudioBookShelf |
    | security (CT) | 214 | AgentDVR |
    | realms (CT) | 216 | Minecraft Server |
    | blog (CT) | 217 | Ghost |
    | ourtube (CT) | 218 | ytdl-sub YouTube Archive |
    | cloud (CT) | 219 | NextCloud |
    | remote (CT) | 221 | Rustdesk Server |

    Here is the overhead for everything. The CPU is an i3-6100 and the RAM is 2133 MHz:

    Quick note about my setup: some things threw a permissions hissy fit when run in separate containers, so Media actually has Emby, Sonarr, Radarr, Prowlarr and two instances of qBittorrent. A few of my containers do have supplementary programs.


  • An LXC is isolated, system-wise, by default (unprivileged) and has very low resource requirements.

    • Storage also expands when needed, i.e. you can say it can have 40GB but it’ll only use as much as it actually needs, and nothing bad will happen if your allocated storage is higher than your physical storage… until total usage approaches 100%. So there’s some flexibility. With a VM, the storage allocation is fixed.
    • Usually a Debian 12 container image takes up ~1.5GB.
    • LXCs are perfectly good for most use cases. VMs, for me, only come in when necessary: when the desired program has greater needs like root privileges (in which case a VM is much safer than giving an LXC access to the Proxmox system), or when the program is a full OS, as in the case of Home Assistant.

    Separating each service ensures that if something breaks, there are zero collateral casualties.