  • I recommend Librewolf. It’s a lot more privacy-aggressive out of the box, and you can turn that down a little if you need to, but otherwise it’s just a more trustworthy Firefox fork as far as I’m concerned. It supports Firefox sync as well, which is telling: Librewolf takes privacy very seriously and isn’t going to hand you many easy opportunities to completely compromise it. Like the other person said, sync is E2EE and the hosting server has zero knowledge of any of your unencrypted data. If Librewolf trusts it, I trust it, and I think you can rest assured that with Librewolf it’s probably never going to be sabotaged either, which, as you imply, is not necessarily true of Firefox.

    I don’t recall whether they use Firefox’s sync server directly or if they have their own, but either way, like I said, the server has no knowledge of or access to your unencrypted data.


  • I’m not a super-expert, but I suspect it’s probably still holding open the stdin and stdout file descriptors of the parent process. Try redirecting with &> /dev/null (and < /dev/null for stdin) to throw them away and see if that helps. You could also try adding nohup in front of the npx, which makes the command ignore the hangup signal (SIGHUP) so the child process (npx) isn’t killed when the parent exits. That’s aimed at kind of the opposite of your problem, but it might also help in this case.

    Another possible option is systemd-run --user <command>, which effectively makes it systemd’s problem. A sketch of both approaches is below.
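
    Here’s a minimal sketch of both ideas, with a hypothetical npx serve standing in for whatever you’re actually launching:

        # Throw away stdio so nothing holds the parent's descriptors open,
        # and ignore SIGHUP so the exiting shell can't kill the child.
        nohup npx serve < /dev/null &> /dev/null &

        # Or hand the process to your user's systemd instance as a
        # transient unit, making it systemd's problem entirely.
        systemd-run --user npx serve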



  • Most of the countries in the western world have spent so long without any real risk of being at war that we have no idea how to react to potentially actually being at war. We are unprepared in incredibly profound ways. Imagine being in a war and not having anti-air defenses around your most important strategic nuclear sites, having to rely on troops shooting at incoming aircraft with what I suspect were simply their service weapons, and almost certainly not dedicated anti-drone weapons. Yes, drones are sort of new; that’s not really an excuse. New things will happen during a war. You have to be able to react quickly to defend your critical assets at a moment’s notice. The fact that we’re still not doing that properly is a perfect demonstration of how far behind the curve we really are.

    I hope this changes soon with the sprawling investments being directed towards defense budgets, but I remain unconvinced: will it just result in more hyper-capable, hyper-expensive techno-wonderweapons? It’s the cheap, good-enough, high-supply things that are currently threatening us, and both history and the present seem to tell us it’s usually the cheap, good-enough, high-supply things that both win wars and enable effective defense. Spending money seems like it would imply seriousness, but I don’t think we’re actually taking this seriously enough yet. When you really get serious about war and defense, you need to be asking the real questions about what it’s going to take to win, not just throwing money at the problem.

    Maybe I’m wrong; maybe they’re just sandbagging and waiting for the right moment to reveal our true defensive preparations. But I know a lot of people in various western militaries, and I honestly don’t think so at all, and neither do they. If we are more prepared than we look, it’s a pretty goddamn well-kept secret.



  • It’s very unlikely you are infected by anything unless you were using some crazy settings or addons, or you were hit by some extreme 0-day exploit that hasn’t become widespread yet. Firefox does not (and normally cannot) automatically execute files it downloads, nor are videos a likely vector for remote code execution now that processors have protections like data execution prevention built in. If you’re attacked by malware, it will rely on some other vector, or on trickery to get you to execute the file yourself. I would expect that your performance issues are unrelated, but you should also check Firefox’s addons and extensions, as well as the Startup tab of Task Manager, to make sure nothing has obviously been installed without your knowledge.

    One thing that sticks out at me is the fact that you only mention the file’s “title”. If you haven’t already, make sure Windows Explorer is set up to ALWAYS show full file extensions. That’s a basic safety measure that really should be on by default but isn’t, and it’s mandatory if you’re messing around on the darker parts of the web. You have to know the file’s real extension because that determines what Windows is going to do with it, and when a file is supposed to be one thing but Windows is going to do something different with it, that’s a huge red flag that it’s malware trying to trick you into running it. (One way to flip that setting is sketched below.)
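
    The checkbox lives in Explorer’s View options, but for what it’s worth, this is the registry value behind it if you’d rather script it; a sketch (run it in a Command Prompt, then restart Explorer or log off and on for it to take effect):

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v HideFileExt /t REG_DWORD /d 0 /f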

    You can upload the file to VirusTotal if you want to scan it, but it doesn’t sound likely that it even ran, unless you did something bad by accident.


  • The first problem would be the height of the intervening terrain, and even if you could overcome that, you still have to contend with friction inside the pipe, a factor most people don’t think about over short distances, but one that becomes massive when you start trying to carry water long distances through a pipe. An ideal siphon inside an ideal pipe is simply a question of the height difference between source and destination. In the real world, however, a siphon isn’t unlimited or ideal. There is a height it can’t overcome, and it’s actually not very high at all, geographically speaking: the maximum height of a siphon is only around 10 meters. The terrain between the Red Sea and the Dead Sea is pretty flat, but it’s probably not that flat. I’m not going to pretend I’ve done a precise survey of potential routes, but I’d expect there are some bumps in elevation along the way that would realistically need, say, 100 meters of lift to overcome. But even 11 meters would simply end the conversation. There’s simply no way around that for a siphon.

    The reason for this height limitation has to do with the atmospheric pressure required to keep the water liquid: once the water no longer has enough pressure on it to stay liquid, it simply vaporizes before it reaches the height it needs to, and the siphon is broken before it even starts. In a vacuum, at standard temperature, water instantly vaporizes. The external atmospheric pressure (which is acting on the entire water column, pushing it up to its highest point to get it over the hill) is all that keeps the water in its liquid form inside the siphon. The higher you go, the more work that external pressure is doing, and eventually the weight of the water column exceeds the atmospheric pressure at the bottom of the column and, again, the siphon breaks.
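
    You can put a rough number on that limit with a simple hydrostatic balance, treating the pressure at the crest as zero and ignoring vapor pressure and temperature (both of which shave the real figure down a bit further):

        h_{max} \approx \frac{P_{atm}}{\rho g}
                = \frac{101{,}325\ \text{Pa}}{1000\ \text{kg/m}^3 \times 9.81\ \text{m/s}^2}
                \approx 10.3\ \text{m}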

    Friction is the other problem. Even if you could limit your route to no more than 10 meters above the Red Sea, you’re asking the siphon not only to lift the water to that height, but also to carry it through 200 kilometers of pipe or more. We don’t think of pipes as having friction, but they do, and it’s very significant at those distances, especially when your power source (gravity, in this case) is already operating near its absolute limits due to the height problem we just discussed. What you hoped would be a gusher of a siphon will end up being a trickle, if anything at all, with most of the water just sitting idle in the pipe to maintain the siphon while a little dribbles its way slowly through to the destination.
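
    To get a feel for the scale, here’s the standard Darcy-Weisbach friction head loss with illustrative numbers I’m picking purely for the sketch (friction factor f ≈ 0.02, pipe diameter D = 1 m, flow speed v = 1 m/s, length L = 200 km):

        h_f = f \, \frac{L}{D} \, \frac{v^2}{2g}
            \approx 0.02 \times \frac{200{,}000\ \text{m}}{1\ \text{m}} \times \frac{(1\ \text{m/s})^2}{2 \times 9.81\ \text{m/s}^2}
            \approx 204\ \text{m}

    In other words, pushing even a gentle 1 m/s flow through 200 km of pipe would cost on the order of 200 meters of head, which completely swamps the roughly 10 meters that atmospheric pressure can contribute.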

    Finally, you’ve got all kinds of other more obscure effects at play at those scales, like water’s surface tension, variability of flow rates, possible pinhole leaks in the pipe that will introduce air, off-gassing of dissolved gases in the water or even from the pipe itself, and temperature gradients inside the pipe. All of these are going to play havoc with the ability to form and sustain a reliable siphon.

    In short, siphons are actually pretty limited. We don’t see much of those limitations at small scales, but at the larger scale of this project they become very serious, very quickly, and basically remove the possibility of using a siphon for any realistic practical water-relocation project. Almost all of those problems go away the moment you pressurize the system with a pump instead of relying on atmospheric pressure alone. It’s a fun thought experiment, but in practice a simple electric pump turns out to be a pretty cheap way to solve a lot of otherwise really complex hydrodynamic problems, and when that’s the case, it’s not really worth teasing out a solution with all kinds of complicated engineering. Just throw a pump at the problem and call it a day; job done.




  • “You may not agree with what the military does, but you have to respect them for that reason alone, above all else.”

    This premise must be rejected. You do NOT have to respect them for that reason alone, and certainly not above all else.

    Did Nazi soldiers deserve respect because they were just following orders? What options did they really have, after all? Were they not also facing the potential of harsh punishment if they refused?

    Not having good alternative options is not an excuse for following orders you know are wrong. Respect is earned when your morality supersedes your orders, despite the potential (and sometimes very real and significant) punishments. Your intentions only get you so far; eventually you need to act, or else any remaining respect for you will be gone.





  • Looks really nice, and it seems like it should be a great foundation for future development. Personally, I can’t drop Nextcloud until there are sufficiently featureful and reliable clients for Linux, Windows, and Android that synchronize a local copy and help manage the inevitable file conflicts (Nextcloud Desktop only barely qualifies at this, but it does technically qualify, and that represents the minimum viable product for me). I’m not sure a WebDAV client alone is enough to satisfy those criteria, but I’m not going to pretend I’m actually familiar with any WebDAV clients, so maybe they already exist.


  • I wonder if some kind of pledge system would work. Similar to Patreon, but instead of paying immediately or monthly, you simply make a pledge that if such-and-such creator starts uploading content to such-and-such free platform, then your pledge of $x goes through (either monthly or one-time). No actual money changes hands, and there’s no actual commitment to pay, until and unless the creator in question comes onboard. Sort of like a bug bounty: multiple people could make individual pledges to build up a pot of money that the creator could then cash in. I can see some avenues for potential abuse, and of course people can just cancel once “mission accomplished”, and you’ve got to expect some level of that happening. But assuming creators and their community have a good and supportive relationship and really are trying to support each other, I don’t see that being too much of an actual concern.

    We need something that helps make the case to creators that there is a real market for this, that there is a path to being compensated for their work, and that the parts of their community interested in this will still help support them. It doesn’t even necessarily have to be directly competitive with Youtube; at this point, we’re not going to collect millions of dollars. But it shows that there is money being left on the table, and even if it’s only a little bit, it’s not much work to collect it, and the creator can ultimately decide whether that work is worth the extra money.

    Right now, there’s no guarantee at all, and in fact there usually is no financial benefit. Creators are just guessing whether there might be some money down the road for them on alternative platforms, and that’s a pretty tough sell for anybody, never mind somebody already making millions.


  • You’re on the right track. Like everything else in self-hosting you will learn and develop new strategies and scale things up to an appropriate level as you go and as your homelab grows. I think the key is to start with something immediately achievable, and iterate fast, aiming for continuous improvement.

    My first idea was much like yours: very traditional documentation, with words, in a document. I quickly found the same thing you did; it’s half-baked and insufficient. There’s simply no way to make it match the actual state of the system perfectly, and English alone is inadequate for explaining what I did, because it ends up being too vague to be useful in a technical sense.

    My next realization was that in most cases what I really wanted was to be able to know every single command I had ever run, basically without exception. So I started documenting that instead of focusing on the wording and the explanations. Then I started to feel like I wasn’t capturing every command reliably, because I would get distracted trying to figure out a problem and forget to record them, and it was a duplication of effort to copy and paste commands from the console to the document or vice versa. That turned into the idea of collecting bunches of commands together into a script that I could potentially just run, which would at least reduce the risk of gaps and missing steps. I could put the commands I wanted to run right into the script, run the script, and then save it for posterity, knowing I’d accurately captured both the commands I ran and the changes I made to get things working, by keeping it all in version control.

    But upon attempting to do so, I found that a bunch of long lists of commands on their own isn’t terribly useful, so I started to group the lists up, finding commonalities by things like server or service, and organizing them into scripts for different roles and intents that I could apply to any server or service. Over time this developed into quite a library of scripts. As I was doing this organizing, I realized that as long as I made sure a script was functionally idempotent (it doesn’t change behaviors or duplicate work when run repeatedly; an important concept), I could guarantee that all my commands were properly documented and also that they had all been run. And if they haven’t been, or I’m not sure, I can just run the script again, since it’s supposed to always be safe to re-run no matter what state the system is in. So I moved more and more to this strategy, until I realized that if I organized it well enough, and made the scripts run automatically whenever they are changed or updated, I could not only strengthen my guarantee that all these commands have reliably run, but also quickly run them on many different servers and services at once without even having to think about it.
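
    Here’s a minimal sketch of what I mean by idempotent, with a hypothetical nginx setup standing in for whatever you’re deploying; every step checks state before acting, so re-running the script is always safe:

        #!/usr/bin/env bash
        set -euo pipefail

        # Install the package only if it isn't already present
        # (nginx is just an example service here).
        if ! dpkg -s nginx > /dev/null 2>&1; then
            apt-get install -y nginx
        fi

        # Deploy the config only if it actually changed, and only
        # reload the service when a change was made.
        if ! cmp -s ./nginx.conf /etc/nginx/nginx.conf; then
            cp ./nginx.conf /etc/nginx/nginx.conf
            systemctl reload nginx
        fi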

    There are some downsides, of course. This leaves the potential for bugs that make a script non-idempotent or unsafe to re-run, and all I can do is try to make sure they don’t happen and, when they do, identify and fix them. The next step is probably some kind of testing process and environment (preferably automated), but now I’m really getting into the weeds. At least I no longer have any real concerns that my system is undocumented; I can quickly reference almost anything it’s doing or how it’s set up. That said, another risk is that the system of scripts and automation becomes so complex that it’s too tangled to quickly understand, and at that point I’ll need better documentation for the scripts themselves. Ultimately you run into a circular problem: how do you validate that your scripts are actually working, doing what you expect, and not missing anything? Usually you run right back into the same ideas that doomed your documentation from the start: consistency and accuracy.

    It also opens an attack vector: somebody gaining access to these scripts gains not only the most detailed knowledge of how your system is configured, but also the potential to inject commands into them and run those commands anywhere. So you have to treat these scripts and systems like the crown jewels they are. If they are compromised, you are in serious trouble.

    By now I have of course realized (and you all probably have too) that I have independently re-invented infrastructure-as-code. There are tools and systems (Ansible and Terraform come to mind) to help you do this, and at some point I may decide to take advantage of them, but personally I’m not there yet. Maybe soon. If you want to skip the intermediate steps I took, you might even be able to jump directly to that approach. But personally I think there is value in the process: it helps you define your needs and builds your understanding that there really isn’t anything magical going on behind the scenes, and that may keep these tools from turning into a black box that doesn’t actually help you understand your system.

    Do I have a perfect system? Of course not. In a lot of ways it’s probably horrific and I’m sure there are more experienced professionals out there cringing or perhaps already furiously warming up their keyboards. But I learned a lot, understand a lot more than I did when I started, and you can too. Maybe you’ll follow the same path I did, maybe you won’t. But you’ll get there.




  • Nextcloud is just really slow. It is what it is; I don’t use it for anything that is huge, numerous, or speed-sensitive. For that I use SyncThing, or something even more specialized depending on what exactly I’m trying to do.

    Nextcloud is just my easy and convenient little dropbox, and I treat it like an old-school free Dropbox with limited space that’s going to nag me to upgrade if I put too much stuff in it. It won’t actually nag me to upgrade, but it will get slow, so I just don’t stress it out. I only use it to store little convenience things that I want easy access to on all my machines without any fuss. For documents, my “home directory”, syncing my calendars, and stuff like that, it’s great and serves the purpose.

    I haven’t used Seafile. The features sound good, minus the AI buzzword soup, but it looks a little too corporate-enterprisey for me, with minimal commitment to open source and no actual link to anything open source on their website. I don’t doubt that it exists, somewhere, but that raises red flags of potential future (if not in-progress) enshittification for me. After eventually finding their GitHub repo (with no help from them), I finally found a link to build instructions and… it’s a broken link. Either they aren’t actually looking for contributions, or they’re just going through the motions. The open source “community” is clearly not the real target audience of their “community edition”.

    I’ll stick to SyncThing.


  • According to the protocol they share (ActivityPub), communities and hashtags are essentially the same thing: a grouping that contains many posts. Typing out a hashtag is how you tell Mastodon to add your post to that “hashtag group” (and you can add your post to multiple hashtags). In Lemmy, the community you post in IS the group (and you can cross-post to multiple communities). The result is the same. They’re the same thing, just with different ways of connecting your posts to them, displayed in very different ways depending on which part of the Fediverse you’re using. (There’s a quick way to see this from the Lemmy side, sketched below.)
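
    If you want to see it for yourself, you can ask a Lemmy instance for a community’s ActivityPub representation; a sketch, using lemmy.ml’s /c/linux purely as an example community (the JSON that comes back should be an actor whose type is Group):

        # Request the ActivityPub JSON instead of the HTML page.
        curl -H 'Accept: application/activity+json' https://lemmy.ml/c/linux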