Data exfiltration was the most common form of malware in Sonatype's report, with more than 4,400 packages designed to steal secrets, personally identifiable information, credentials, and API tokens.

  • DapperPenguin@programming.dev · 1 day ago

    I’m far from an expert, but we know it takes a village.

    As far as static analysis goes, I can think of something quite simple: run strace on your processes to see what syscalls and filesystem access the process actually needs (in a trusted scenario - a maintainer’s burden). Once that analysis is done, the proper security features could be applied (on Linux, seccomp filtering for syscalls and Landlock for filesystem access) to minimize risk.
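
    To make that concrete, here is a minimal sketch of the seccomp half in Rust, assuming the `libseccomp` crate; the syscall list is purely illustrative and the exact crate API may differ by version - the real allow-list would come from the strace analysis.

    ```rust
    // Hedged sketch: turn a strace-derived syscall list into a seccomp allow-list.
    // Assumes the `libseccomp` crate; method names may differ between versions.
    use libseccomp::{ScmpAction, ScmpFilterContext, ScmpSyscall};

    fn apply_syscall_allowlist() -> Result<(), Box<dyn std::error::Error>> {
        // Default action: kill the process for any syscall not explicitly allowed.
        let mut filter = ScmpFilterContext::new_filter(ScmpAction::KillProcess)?;

        // Allow only what the strace run showed the program actually needs
        // (illustrative list, not a real profile).
        for name in ["read", "write", "openat", "close", "exit_group"] {
            filter.add_rule(ScmpAction::Allow, ScmpSyscall::from_name(name)?)?;
        }

        // Load the filter into the kernel; from here on, anything else is fatal.
        filter.load()?;
        Ok(())
    }
    ```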

    A caveat to this, however, can be seen in the xz attack. The attacker sabotaged the build so that the Landlock feature would not compile or link, which gave the backdoor the attack surface it needed. So the project was practicing good security, but it means little if maintainers cannot audit their own commits. That is closer to the general programming static analysis I believe you were going for. In that case, many compilers come with verbose static analysis features - clang-tidy is one example, and the Rust compiler is already quite strict by default. Perhaps with more rigid CI/CD restrictions enforced with these analysis tools, such commits would not be able to make it through?

    • I’m happy to participate, but we don’t have a process yet.

      Let’s say I do audit a specific version of a dependency I use. How do I communicate to others that I’ve done this? Why would anyone trust me anyway? I’ve mentioned that I’m not an infosec expert; how much is my audit worth?

      I have run programs inside firejail before and watched for network activity where there shouldn’t be any, but even if that is a useful activity, how do I publish my results so that not everyone has to run the same program in firejail themselves? What do non-technical users do?

      And this active approach has three problems: 1) you’ll only see the malicious activity if you hit the code path of the attack; looking for it this way is like doing unit tests by running the code and hoping you get 100% code coverage. 2) These supply chain attacks can be sophisticated, and I wouldn’t be surprised if you can tell that you’re running in firejail and just not execute the code. 3) This approach isn’t useful for programs which depend on network connections, or access to secrets - some programs need both. In an extreme example, there’d be no way to expose a supply chain attack embedded in a browser, which often both has access to secrets and whose main purpose is networking.

      The main problem is that we’re in the decade of Linux, and a whole population of people are coming in who are not nerds. They’re not going to be running strace or firejail. How are we going to make OSS secure for these people?

      • DapperPenguin@programming.dev · 6 hours ago

        Let’s say I do audit a specific version of a dependency I use. How do I communicate to others that I’ve done this? Why would anyone trust me anyway? I’ve mentioned that I’m not an infosec expert; how much is my audit worth?

        Here is an example of how outsourced/decentralized audits can be reported to at least a centralized organization: https://rustsec.org/advisories/. And you can run `cargo install cargo-audit`, after which `cargo audit` will check your Rust crate’s dependencies and report whether their selected versions are under any active advisory.

        1. you’ll only see the malicious activity if you hit the code path of the attack; looking for it this way is like doing unit tests by running the code and hoping you get 100% code coverage

        In this manner, I believe that at the end of the day it amounts to a sort of halting problem - which cannot be solved in general. Detecting this kind of thing would probably take sophisticated AI agents to pore through code and flag attack vectors, like unsanitized command parsing.

        Otherwise, a general code checker like clang-tidy won’t throw a red flag for a program that correctly reads your $HOME directory and sends it to a random server - that is valid code, after all - unless there is a technology to clearly define sandboxing constraints before compilation (or before runtime). That is why I gave the example of using seccomp and Landlock to clearly define runtime behavior. Maybe there are better solutions where you generate, say, a CSV table of what you whitelist and have it satisfied at compile time or at runtime - for example, unzipping a file: the program only knows at runtime which file it needs to read and parse, so of course at runtime it has to be allowed to read that file (which you cannot know at compile time). I don’t want to word vomit here, so I’ll leave it at that.
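
        As a rough illustration of that whitelist idea, here is a hedged Rust sketch of the unzip example using the `landlock` crate; the paths are hypothetical and the exact crate API may differ by version.

        ```rust
        // Hedged sketch: enforce a declared filesystem whitelist (the "CSV table"
        // idea) with Landlock before doing any real work. Paths are illustrative.
        use landlock::{
            path_beneath_rules, Access, AccessFs, Ruleset, RulesetAttr, RulesetCreatedAttr, ABI,
        };

        fn enforce_manifest(read_only: &[&str], read_write: &[&str]) -> Result<(), Box<dyn std::error::Error>> {
            let abi = ABI::V1;
            Ruleset::default()
                // Declare which kinds of filesystem access this ruleset governs.
                .handle_access(AccessFs::from_all(abi))?
                .create()?
                // Read-only access to the declared inputs (e.g. the zip archive).
                .add_rules(path_beneath_rules(read_only, AccessFs::from_read(abi)))?
                // Read-write access to the declared output directory.
                .add_rules(path_beneath_rules(read_write, AccessFs::from_all(abi)))?
                // Everything else on the filesystem is now off limits.
                .restrict_self()?;
            Ok(())
        }

        fn main() -> Result<(), Box<dyn std::error::Error>> {
            // The whitelist could just as well be parsed from a CSV manifest.
            enforce_manifest(&["/home/user/archive.zip"], &["/home/user/unpacked"])?;
            // ... unzip here; a hidden attempt to read ~/.ssh would now fail.
            Ok(())
        }
        ```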

        2. These supply chain attacks can be sophisticated, and I wouldn’t be surprised if you can tell that you’re running in firejail and just not execute the code

        Yes, certainly. Software can determine the state of its environment. Look at web browsers for this - it is practically impossible to get away from the fingerprinting problem. The following is my speculation and may be incorrect: it stays practically impossible unless environment standards are made to address it - for example, Firefox shipping uBlock Origin for every user (or, more extreme, killing all JavaScript). Yes, on the one hand it would break half of the web, but the standard would leave a much smaller set of identifying features across the population. The same could potentially be done with running processes in an operating system. Operating systems and web browsers are both highly complex systems, and I cannot say I know better than the folks making the big decisions there; I’m only speaking in ideals.

        3. This approach isn’t useful for programs which depend on network connections, or access to secrets - some programs need both.

        I feel as though my replies have been too long already on subjects I’m no expert in. Networking is a whole other beast to tackle: even if you make valid connection attempts with proper code integrity, traffic can still be intercepted via MITM, or the remote servers could be compromised and used to steal your data or attack your system from there.

        The main problem is that we’re in the decade of Linux, and a whole population of people are coming in who are not nerds. They’re not going to be running strace or firejail. How are we going to make OSS secure for these people?

        I don’t want to use the term “fear mongering”, but I think you may be a bit too concerned here. I don’t think the average Joe or Jill is going to be interacting with all sorts of random obscure FOSS projects the way more technical users like us do when we program or experiment with services ourselves. They will stick to highly vetted and well-supported projects, and if those get attacked then lots of people will be affected and many eyes will be monitoring the situation. Normies probably stick around big corporate spaces anyway (YouTube, Google, Facebook, Twitter, Steam). All of these places deal with attacks, of course, regardless of whether users are on Linux or Windows.

        Windows had a vulnerability not so long ago where sending a malformed IPv6 packet gave you remote code execution on the host! Just because it’s not FOSS doesn’t mean it won’t get attacked.