• 0 Posts
  • 185 Comments
Joined 2 years ago
Cake day: June 7th, 2023


  • Remote access in devices can be a good thing. The issue is one of control. Given the software-driven nature and complexity of devices, bugs are inevitable. Having a way for the manufacturer to distribute updates remotely is a good thing, as it lowers costs and makes it more likely that updates actually get deployed. That said, the ability to enable and disable that remote access system needs to be in the hands of the customer, not the manufacturer.

    As an example, many years ago I worked for a company which manufactured physical access control systems (think those stinking badges and readers at office buildings). We had two scenarios come up which illustrate the issue quite well. In the first case, the hardware which controlled the individual doors had a bug which caused the doors to fail unlocked. And based on the age of the hardware, the only way to update the firmware was to physically go to each device and replace an EEPROM. I spent a very long day wandering a customer’s site, climbing a ladder over and over again. This was slow, expensive and just generally not a great experience for anyone involved. In the second case, there were database issues with a customer’s server. At that time, these systems weren’t internet-connected, so that route for support didn’t exist. However, we shipped each system with a modem and remote access software. So, the customer hooked up the modem, gave us a number to dial in, and we fixed the problem fairly quickly. The customer then unplugged the modem and went about breaking the system again.

    Having a way for the manufacturer to connect and support the system is important. They just shouldn’t have free run of the system at all times. The customer should also be told about the remote support capability before buying the system, and be able to turn it off. Sure, it’s possible to have reasonably secure remote logins on the internet (see: SSH or VPN), but it’s far more secure to just not have the service exposed at all. How many routers have been hacked because the manufacturers decided to create and leave in backdoors?
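
    To make the “customer holds the switch” idea concrete, here’s a minimal sketch (the service name and flag path are entirely hypothetical, not from any real product) of a toggle that only runs the vendor’s remote-support service while the owner has explicitly enabled it:

    ```python
    import pathlib
    import subprocess

    # Hypothetical names: the flag path and systemd unit are placeholders.
    ENABLE_FLAG = pathlib.Path("/etc/acme-panel/remote-support-enabled")
    SUPPORT_UNIT = "vendor-remote-support.service"

    def set_remote_support(enabled: bool) -> None:
        """Customer-facing switch: start or stop the vendor's support access."""
        if enabled:
            # Record the owner's choice, then bring the support service up.
            ENABLE_FLAG.touch()
            subprocess.run(["systemctl", "start", SUPPORT_UNIT], check=True)
        else:
            # Remove the flag and shut the service down entirely.
            ENABLE_FLAG.unlink(missing_ok=True)
            subprocess.run(["systemctl", "stop", SUPPORT_UNIT], check=True)
    ```

    The point isn’t the specific mechanism, it’s that the off switch lives on the customer’s side, not the vendor’s.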


  • The main thing I have from that time is several large boxes hanging about taking up shelf space and a burning hatred of MMOs. My wife and I got into WoW during late Vanilla. We stood in line at midnight to get the collector’s edition box for WotLK and later again for Cataclysm (we weren’t that far gone when The Burning Crusade released). Shortly after Cataclysm released, there was the Midsummer Fire Festival, and as we were playing through it, we hit that wall where any further quests were locked behind “do these daily quests 10,000 times to progress” and the whole suspension of disbelief just came crashing down. I had already hated daily quests and the grindy elements of the game, but at that moment I just said, “fuck this” and walked away from the game.

    I do look back fondly on some of the good times we had in the game. Certainly in Vanilla there was some amazing writing and world crafting. We met some good people and had a lot of fun over the years and I don’t regret the time or money spent. However, one thing it taught me is just how pointless MMOs are. They are specifically designed to be endless treadmills. And this can be OK, so long as the treadmill itself is well designed and fun. But, so many of the elements exist just to eat time. Instead of being fun, they suck the fun out of the game and turn it into a job.

    We even tried a few other MMOs after that point (e.g. Star Wars) just because we wanted something to fill that niche in our gaming time. But invariably, there would be the grind mechanics which ruined the game for us. Or worse yet, pay to win mechanics where the game would literally dangle offers of “pay $X to shortcut this pointless grind” (ESO pops to mind for this). If the game is offering me ways to pay money to not play the game, then I’ll take the easier route and not play the game at all, thank you very much.

    So ya, WoW taught me to hate MMOs and grinding in games. And that’s good, I guess.


  • What you are trying to do is called P2V, for Physical to Virtual. VMware used to have tools specifically for this. I haven’t used them in a decade or more, but they likely still work. That should let you spin up the virtual system in VMware Player (I’d test this before wiping the drive), and you can likely convert the resulting VM to other formats (e.g. VirtualBox). Again, test it out before wiping the drive; nothing sucks like discovering you lost data because you just had to rush things.
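
    For the format-conversion step, here’s a minimal sketch of one way to do it, assuming qemu-img is installed and using placeholder file names; it converts the VMDK a P2V tool produces into a VirtualBox VDI:

    ```python
    import subprocess

    # Placeholder paths: point these at the disk image the P2V tool produced.
    SRC_VMDK = "converted-machine.vmdk"
    DST_VDI = "converted-machine.vdi"

    # qemu-img can translate between VMware's VMDK and VirtualBox's VDI formats.
    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "vdi", SRC_VMDK, DST_VDI],
        check=True,
    )

    # Sanity-check the result before touching the original drive.
    subprocess.run(["qemu-img", "info", DST_VDI], check=True)
    ```

    If the converted image actually boots in VirtualBox, then (and only then) start thinking about wiping the original drive.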



  • It would be interesting to see someone with the background to understand the arguments involved in the paper give it a good review.

    That said, I’ve never bought the simulation hypothesis, on the simple grounds of compute resources. Part of the argument tends to be the idea of an infinite recursion of simulations, making the possible number of simulations infinite. This has one minor issue: where are all those simulations running? If the top level (call it U0, for Universe 0) is running a simulation (U1) and that simulation decides to run its own simulation (U2), where is U2 running? While the naive answer is U1, this cannot actually be true. U1 doesn’t actually exist; everything it is doing is actually being run up in U0. Therefore, for U1 to think it’s running U2, U0 needs to simulate U2 and pipe the results into U1. And this logic continues for every sub-simulation run: they must all be simulated by U0. And while U0 may have vast resources dedicated to their simulation, they do not have infinite resources and would have to limit the number of sub-simulations which could be run.
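
    As a rough way to put numbers on this (my own back-of-the-envelope figuring, not from the paper): if every simulated universe tries to spawn k child simulations, and each child costs U0 at least c units of compute, then supporting d levels of nesting costs U0

    $$C_{\text{total}} = \sum_{n=1}^{d} k^{n} c = c \, \frac{k\left(k^{d} - 1\right)}{k - 1}$$

    which grows without bound as k or d grows. A finite budget in U0 caps the total number of sub-simulations, no matter how the nesting is arranged.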






  • My bet is on it never getting completed. It’s going to be a running grift over the next few years. There will be delay after delay after delay, with multiple “independent” contractors rolling through to deal with whatever the current delay is. Those contractors will be chosen via a competitive bid process, with the company bidding the highest kickbacks to Trump being awarded the contract. At the end of the Trump administration, anything actually constructed on the grounds will need to be torn down due to engineering failures and the multitudes of bugs planted by foreign spy agencies.




  • If the goal is stability, I would likely have started with an immutable OS. This provides some assurance that the base OS is in a known good state.
    With that base, I’d tend towards:
    Flatpak > Container > AppImage

    My reasoning for this being:

    1. Installing software should not affect the base OS (nor can it, with an immutable OS). Changes to the base OS and system libraries are a major source of instability and dependency hell. So, everything should be self-contained.
    2. Installing one software package should not affect another software package. This is basically pushing software towards being immutable as well. The install of Software Package 1 should have no way to bork Software Package 2. Hence the need for isolating those packages as flatpaks, AppImages or containers.
    3. Software should be updated (even on Linux, install your fucking updates). This is why I have Flatpak at the top of the list: it has a built-in mechanism for updating. Container images can be made to update reasonably automatically, but that has risks. By using something like docker-compose and having services tied to the “:latest” tag, images would auto-update. However, it’s possible to have stacks where a breaking change is made in one service before another service is able to deal with it. So, I tend to pin things to specific versions and update those manually (a rough sketch of that update pass follows this list). Finally, while I really like AppImages, updating them is 100% manual.
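
    For what it’s worth, here’s what that update pass can look like, assuming Flatpak and the docker compose plugin are installed and the compose file pins explicit version tags (the stack path is a placeholder):

    ```python
    import subprocess

    # Flatpak has updates built in; this just runs them non-interactively.
    subprocess.run(["flatpak", "update", "-y"], check=True)

    # For containers pinned to specific version tags, bump the tags in the
    # compose file first (a deliberate, manual step), then pull and restart.
    COMPOSE_DIR = "/srv/stacks/myservice"  # placeholder path
    subprocess.run(["docker", "compose", "pull"], cwd=COMPOSE_DIR, check=True)
    subprocess.run(["docker", "compose", "up", "-d"], cwd=COMPOSE_DIR, check=True)
    ```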

    This leaves the question of apt packages or doing installs via make. And the answer is: don’t do that. If there is no flatpak, AppImage, or pre-made container, make your own container. Dockerfiles are really simple. Sure, they can get super complex and do some amazing stuff; you don’t need that for a single software package. Make simple, reasonable choices and keep all the craziness of that software package walled off from everything else.
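
    To show how simple the single-package case can be, here’s a minimal sketch, with the package name and base image as placeholders, that writes a throwaway Dockerfile and builds a local image from it:

    ```python
    import pathlib
    import subprocess
    import tempfile

    # Placeholder Dockerfile: swap "example-tool" for the package you actually need.
    dockerfile = """\
    FROM debian:stable-slim
    RUN apt-get update \\
     && apt-get install -y --no-install-recommends example-tool \\
     && rm -rf /var/lib/apt/lists/*
    ENTRYPOINT ["example-tool"]
    """

    build_dir = pathlib.Path(tempfile.mkdtemp())
    (build_dir / "Dockerfile").write_text(dockerfile)

    # Build a local image; everything the package drops on disk stays inside it.
    subprocess.run(["docker", "build", "-t", "example-tool:local", str(build_dir)], check=True)
    ```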


  • Traditions exist to pass on learned knowledge and for social cohesion. Prior to widespread education, many local groups had to learn the same lessons and find a way to pass those on from person to person and generation to generation. Given that this also tended to coincide with societies not having the best grasp on reality (germ theory is not that old), the knowledge being passed on was often specious. But, it might also contain useful bits which worked.

    For example, some early societies would pack honey into a wound. Why? Fuck if they knew, but that was what the wise men said to do. It turns out that honey is a natural antiseptic and helps to prevent infection. They had no knowledge of this, but had built up a tradition around it, probably because it seemed to work. And so that got passed on.

    The other aspect of traditions is social. When people do a thing together, they tend to bond and become willing to engage in more pro-social behaviors. It isn’t all that important what the activity is, so long as people do it together. The more people feel like they are part of the in-group, the more they will work to protect and sacrifice for that in-group.

    Sure, a lot of traditions are complete crap. They are superstition wrapped in a “that’s the way we’ve always done it” attitude. But it’s important not to overlook their significance to a population. The Christian Church ran headlong into this time and again throughout European history as it sought to convert various groups. Those groups tended to hold on to old traditions and just blended them into Christianity. This resulted in a fairly fractured religious landscape, but the Church generally tolerated it, because trying to quash it led to too many problems. While stories of various Easter and Christmas traditions being Pagan in origin are likely apocryphal, there are echoes of older religious beliefs hanging about.

    It’s best to be careful when looking at a particular group’s traditions and calling them “backwards” or some other epithet. Yes, they almost certainly have no basis in the scientific method. But the value of those traditions to a people is very real. And so long as they are not harmful to others, you’re likely to do more harm trying to remove them than by simply allowing folks to just enjoy them.


  • It’s going to depend on what types of data you are looking to protect, how you have your wifi configured, what type of sites you are accessing and whom you are willing to trust.

    To start with, if you are accessing unencrypted websites (HTTP), at least part of the communications will be in the clear and open to inspection. You can mitigate this somewhat with a VPN. However, this means that you need to implicitly trust the VPN provider with a lot of data. Your communications to the VPN provider would be encrypted, though anyone observing your connection (e.g. your ISP) would be able to see that you are communicating with that VPN provider. And any communications from the VPN provider to/from the unencrypted website would also be in the clear and could be read by someone sniffing the VPN exit node’s traffic (e.g. the ISP used by the VPN exit node). Lastly, the VPN provider would have a very clear view of the traffic and be able to associate it with you.

    For encrypted websites (HTTPS), the data portion of the communications will usually be well encrypted and safe from spying (more on this in a sec). However, it may be possible for someone (e.g. your ISP) to snoop on what domains you are visiting. There are two common ways to do this.

    The first is via DNS requests. Any time you visit a website, your browser will need to translate the domain name to an IP address. This is what DNS does, and it is not encrypted by default. Also, unless you have taken steps to avoid it, it’s likely your ISP is providing DNS for you. This means they can just log all your requests, giving them a good view of the domains you are visiting. You can use something like DNS over HTTPS (DoH), which does encrypt DNS requests and sends them to specific servers; but this usually requires extra setup, and it will work regardless of whether you are on your local WiFi or a 5G/4G network.

    The second way to track HTTPS connections is via Server Name Indication (SNI). In short, when you first connect to a web server, your browser needs to tell that server which domain it wants to connect to, so that the server can send back the correct TLS certificate. This is all unencrypted, and anyone in between (e.g. your ISP) can simply read that SNI field to know what domains you are connecting to. There are mitigations for this, specifically Encrypted SNI (ESNI), but that requires the web server to implement it, and it’s not widely used. This is also where a VPN can be useful, as the SNI request is encrypted between your system and the VPN exit node. Though again, it puts a lot of trust in the VPN provider, and the VPN provider’s ISP could still see the SNI request as it leaves the VPN network, though associating it with you specifically might be hard.
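
    If it helps, here’s a small sketch showing where those two leaks happen, using nothing but the Python standard library (lemmy.ml is just an example host):

    ```python
    import socket
    import ssl

    HOST = "lemmy.ml"  # example domain

    # Leak 1: the DNS lookup. Unless you've set up DoH/DoT, this query goes out
    # unencrypted, so whoever runs your resolver (often the ISP) sees the domain.
    ip = socket.gethostbyname(HOST)

    # Leak 2: the SNI field. server_hostname is sent in cleartext in the TLS
    # ClientHello so the server can pick the right certificate; anyone on the
    # path can read it, even though everything after the handshake is encrypted.
    ctx = ssl.create_default_context()
    with socket.create_connection((ip, 443)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            print("negotiated:", tls_sock.version())
    ```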

    As for the encrypted data of an HTTPS connection, it is generally safe. So, someone might know you are visiting lemmy.ml, but they wouldn’t be able to see what communities you are reading or what you are posting. That is, unless either your device or the server is compromised. This is why mobile device malware is a common attack vector for state-level threat actors. If they have malware on your device, then all the encryption in the world ain’t helping you. There are also some attacks around forcing your browser to use weaker encryption, or even the attacker compromising the server’s certificate. Though these are likely in the realm of targeted attacks and unlikely to be used on a mass scale.

    So ya, not exactly an ELI5 answer, as there isn’t a simple answer. To try and simplify: if you are visiting encrypted websites (HTTPS), you don’t mind your mobile carrier knowing what domains you are visiting, and your device isn’t compromised, then mobile data is fine. If you would prefer your home ISP be the one tracking you, then use your home wifi. If you don’t like either of them tracking you, then you’ll need to pick a VPN provider you feel comfortable with knowing what sites you are visiting, and use their software on your device. And if your device is compromised, well, you’re fucked anyway and it doesn’t matter what network you are using.


  • No, a game should be what the devs decide to make. That said, it can cut off a part of the market. I’m another one of those folks who tends to avoid PvPvE games without a dedicated PvE-only side. This weekend’s Arc Raiders playtest was a good example. I read through the description on Steam and just decided, “na, I have better things to do with my time.” Unfortunately, those sorts of games tend to have a problem with griefers running about directly trying to ruin other peoples’ enjoyment. I’ll freely admit that I will never be as good as someone who is willing to put the hours into gear grinding, practice and map memorization in such a game. I just don’t enjoy that, and that means I will always be at a severe disadvantage. So, why spend my time and money on such a game?

    This can lead to problems for such games, unless they have a very large player base. The Dark Souls series is a good example: it has a built-in forced PvP system, though you can kinda avoid it for solo play, and it still has a large player base. But I’d also point out some of the controversy around the Seamless Co-op mod for Elden Ring. When it released, the PvP players were howling from the walls about how long it made invasion queues. Since Seamless Co-op meant that the players using it were removed from the official servers, the number of easy targets to invade went way, way down. It seems like a lot of folks like to have co-op, without the risks of invasion.

    As a longer answer to this, let me recommend two videos from Extra Credits:

    These videos provide a way to think about players and how they interact with games and each other.




  • Ultimately, it’s going to be down to your risk profile. What do you have on your machine which you wouldn’t want to lose or have released publicly? For many folks, we have things like pictures and personal documents which we would be rather upset about if they ended up ransomed. And sadly, ransomware exists for Linux. LockBit, for example, is known to have a Linux variant. And this is something which does not require root access to do damage. Most of the stuff you care about as a user exists in user space and is therefore susceptible to malware running in a user context.

    The upshot is that due care can prevent a lot of malware. Don’t download pirated software, don’t run random scripts/binaries you find on the internet, and watch for scam sites trying to convince you to paste random bash commands into the console (ClickFix is after Linux now). But people make mistakes, and it’s entirely possible you’ll make one and get nailed. If you feel the need to pull stuff down from the internet regularly, you might want to have something running as a last line of defense.

    That said, ClamAV is probably sufficient. It has a real-time scanning daemon, and you can run regular, scheduled scans. For most home users, that’s enough. It won’t catch anything truly novel, but most people don’t get hit by the truly novel stuff. It’s more likely you’ll be browsing for porn/pirated movies and either get served a ClickFix/fake AV page or get tricked into running a binary you thought was a movie. Most of these will be known attacks and should be caught by A/V. Of course, nothing is perfect. So, have good backups as well.
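
    As a concrete example of the “regular, scheduled scans” part, here’s a minimal sketch you could run from cron, assuming the ClamAV tools are installed (the scan path is a placeholder):

    ```python
    import subprocess
    import sys

    SCAN_PATH = "/home"  # placeholder: scan whatever you actually care about

    # Refresh signatures first; detection is only as good as the database.
    # This may need root, or may already be handled by a freshclam service.
    subprocess.run(["freshclam"], check=False)

    # Recursive scan, printing only infected files. clamscan exits 0 when clean,
    # 1 when something was found, 2 on errors.
    result = subprocess.run(["clamscan", "-r", "--infected", SCAN_PATH])
    if result.returncode == 1:
        print("Malware found - check the report above and your backups.", file=sys.stderr)
    sys.exit(result.returncode)
    ```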