Data exfiltration was the most common form of malware in Sonatype’s report, with more than 4,400 packages designed to steal secrets, personally identifiable information, credentials, and API tokens.

  • Yeah, this is becoming a real issue.

    We need better tooling for static analysis. I recently updated a package to a new version, and the audit - which I can in no way perform with any authority - was time-consuming because of the extensive dependency tree. I both feel more compelled to do audits and have started hating them; they’re the least fun part of developing OSS, and I really only do it because it’s fun. When it stops being fun, I’m going to stop doing it.

    That’s entirely aside from the fact that it puts a damper on the entire ecosystem for users, of which I’m also clearly one.

    The OSS community (someone smarter and more informed about infosec than I am) needs to come up with a response, or this is going to kill OSS as surely as Microsoft never could.

    • DapperPenguin@programming.dev · 2 days ago

      I’m far from an expert, but we know it takes a village.

      As far as static analysis goes, I can think of something quite simple: run strace on your processes to see what sort of syscall and filesystem access each process actually needs (in a trusted scenario - a maintainer’s burden). Once that analysis is done, you can apply the appropriate security features - on Unix, seccomp filtering for syscalls and Landlock for filesystem access - to minimize risk.
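      For instance, here is a very rough Go sketch of that second step, assuming the third-party go-landlock library (github.com/landlock-lsm/go-landlock) - I’m going from memory on its exact API, so treat it as illustration only. Once strace has shown which paths a tool actually touches, a small wrapper can lock the process down to just those paths before running it:

      ```go
      // Illustrative only: confine this process (and its children) to the
      // filesystem access that an earlier strace run showed it actually needs.
      package main

      import (
          "log"
          "os"
          "os/exec"

          "github.com/landlock-lsm/go-landlock/landlock" // assumed third-party library
      )

      func main() {
          // Grants derived from the strace analysis: read-only system paths,
          // read-write access to a single scratch directory, nothing else.
          err := landlock.V1.BestEffort().RestrictPaths(
              landlock.RODirs("/usr", "/etc"),
              landlock.RWDirs("/tmp/scratch"),
          )
          if err != nil {
              log.Fatalf("landlock: %v", err)
          }

          // Inside the granted paths things behave as usual...
          out, err := exec.Command("cat", "/etc/hostname").CombinedOutput()
          log.Printf("allowed read: %s (err: %v)", out, err)

          // ...while paths outside them should now be denied, regardless of
          // ordinary file permissions.
          out, err = exec.Command("ls", os.Getenv("HOME")).CombinedOutput()
          log.Printf("home listing (expected to fail): %s (err: %v)", out, err)
      }
      ```

      The seccomp half could be layered on the same way (e.g. via libseccomp bindings), but the filesystem part alone already stops a payload from quietly reading things like ~/.ssh behind your back.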

      A caveat to this, however, can be seen in the xz attack. The attacker forced the Landlock feature to not compile or link, which gave the payload the attack surface it needed. So the project was practicing good security, but that means nothing if maintainers cannot audit their own commits. That is more the kind of general static analysis I believe you were going for; many compilers do ship verbose static analysis features - clang-tidy is one example, and the Rust compiler is already quite verbose about this sort of thing. Perhaps with more rigid CI/CD restrictions enforced with these analysis tools, such commits would not be able to make it through?

      • I’m happy to participate, but we don’t have a process yet.

        Let’s say I do audit a specific version of a dependency I use. How do I communicate to others that I’ve done this? Why would anyone trust me anyway? I’ve mentioned that I’m not an infosec expert; how much is my audit worth?

        I have run programs inside firejail before and watched for network activity where there shouldn’t be any, but even if that is a useful exercise, how do I publish my results so that not everyone has to run the same program in firejail themselves? What do non-technical users do? And this active approach has three problems:

        1. You’ll only see the malicious activity if you hit the branch path of the attack; looking for it this way is like doing unit tests by running the code and hoping you get 100% code coverage.
        2. These supply chain attacks can be sophisticated, and I wouldn’t be surprised if you can tell that you’re running in firejail and just not execute the code.
        3. This approach isn’t useful for programs which depend on network connections, or access to secrets - some programs need both. In an extreme example, there’d be no way to expose a supply chain attack embedded in a browser, which often has access to secrets and whose main purpose is networking.

        The main problem is that we’re in the decade of Linux, and a whole population of people are coming in who are not nerds. They’re not going to be running strace or firejail. How are we going to make OSS secure for these people?

        • DapperPenguin@programming.dev · 1 day ago

          Let’s say I do audit a specific version of a dependency I use. How do I communicate to others that I’ve done this? Why would anyone trust me anyway? I’ve mentioned that I’m not an infosec expert; how much is my audit worth?

          Here is an example of how outsourced/decentralized audits can be reported to at least a centralized organization: https://rustsec.org/advisories/. And you can run cargo install cargo-audit; the tool will then report on the nature of your Rust crate’s dependencies and whether their selected versions are under any active advisory.

          1. you’ll only see the malicious activity if you hit the branch path of the attack; looking for it this way is like doing unit tests by running the code and hoping you get 100% code coverage

          In this manner, I believe it ultimately amounts to an attempt to solve the halting problem - which cannot be done. This sort of research would probably take sophisticated AI agents to pore through code and detect attack vectors, like unsanitized command parsing.

          Otherwise, a general code checker like clang-tidy won’t throw a red flag for a program that correctly reads your $HOME directory and sends it to a random server - that is valid code, after all - unless there is a technology to clearly define sandboxing constraints before compilation (or before runtime). That is why I gave the example of using seccomp and Landlock to clearly define runtime behavior. Maybe there are better solutions where you generate, say, a CSV table of what you whitelist, and those grants are either checked at compile time or satisfied at runtime. Take unzipping a file, for example: the program only knows at runtime which file it needs to read and parse, so of course at runtime it will have to read that file (which you cannot know at compile time). I don’t want to word vomit here, so I’ll leave it at that.
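          Actually, to show what I mean by the whitelist table rather than just hand-waving, here is a very rough Go sketch - the grant format and the names in it are entirely made up for illustration. A launcher parses a small table of declared permissions, which it could then translate into Landlock/seccomp rules before exec’ing the real program:

          ```go
          // Illustration of a made-up "grants" manifest: anything not listed is denied.
          package main

          import (
              "encoding/csv"
              "fmt"
              "log"
              "strings"
          )

          type grants struct {
              ReadPaths  []string // directories the program may open read-only
              WritePaths []string // directories it may write to
              NetTargets []string // host:port pairs it may connect to
          }

          func parseGrants(table string) (grants, error) {
              var g grants
              rows, err := csv.NewReader(strings.NewReader(table)).ReadAll()
              if err != nil {
                  return g, err
              }
              for _, row := range rows {
                  if len(row) != 2 {
                      return g, fmt.Errorf("bad row: %v", row)
                  }
                  switch row[0] {
                  case "fs-read":
                      g.ReadPaths = append(g.ReadPaths, row[1])
                  case "fs-write":
                      g.WritePaths = append(g.WritePaths, row[1])
                  case "net-connect":
                      g.NetTargets = append(g.NetTargets, row[1])
                  default:
                      return g, fmt.Errorf("unknown grant %q", row[0])
                  }
              }
              return g, nil
          }

          func main() {
              // The unzip example: it declares up front that it reads archives from one
              // directory, writes to another, and never touches the network.
              g, err := parseGrants("fs-read,/home/user/downloads\nfs-write,/home/user/extracted\n")
              if err != nil {
                  log.Fatal(err)
              }
              // A real launcher would translate these grants into Landlock/seccomp rules
              // (and refuse to run if the table asks for more than the user expects).
              fmt.Printf("%+v\n", g)
          }
          ```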

          2. These supply chain attacks can be sophisticated, and I wouldn’t be surprised if you can tell that you’re running in firejail and just not execute the code

          Yes, certainly. Software can determine the state of its environment. Look at web browsers for this - it is practically impossible to get away from the fingerprinting problem. The following is my speculation and may be incorrect: it will stay practically impossible unless environment standards are made to address it - for example, Firefox forcing every browser to use uBlock (or, more extreme, killing all JavaScript). Yes, on the one hand it would break half of the web, but the standard would leave a much smaller set of identifying features across the population. The same could potentially be done with running processes in an operating system. Operating systems and web browsers are both highly complex systems, and I cannot say I know better than the folks making the big decisions there; I’m only speaking in ideals.

          3. This approach isn’t useful for programs which depend on network connections, or access to secrets - some programs need both.

          I feel as though my replies have been too long already on subjects I’m no expert in. Networking is a whole other beast to tackle: even if you make valid connection attempts with proper code integrity, they can still be intercepted via MITM, or the remote servers could be compromised and used to steal your data or attack your system from there.

          The main problem is that we’re in the decade of Linux, and a whole population of people are coming in who are not nerds. They’re not going to be running strace or firejail. How are we going to make OSS secure for these people?

          I don’t want to use the term “fear mongering”, but I think you may be a bit too concerned here. I don’t think the average Joe or Jill is going to be interacting with all sorts of random obscure FOSS projects the way we more technical users - the ones who program or experiment with services ourselves - do. They will stick to highly vetted and supported projects, and if those get attacked, lots of people will be affected and monitoring the situation. Normies probably stick around big corporate spaces anyway (YouTube, Google, Facebook, Twitter, Steam). All of those places deal with attacks, of course, regardless of whether users are on Linux or Windows.

          Windows had an attack not so long ago where sending a malformed IPv6 packet got you RCE on the host! Just because something isn’t FOSS doesn’t mean it won’t get attacked.

          • I don’t want to use the term “fear mongering”, but I think you may be a bit too concerned here.

            I’m concerned because I maintain numerous OSS projects, and I now have to be justifiably concerned about supply chain attacks. Even Go projects tend to pull in tons of dependencies, and there’s a pattern I’m increasingly encountering where some library will claim to be a “lightweight” or “small” library for X, but then it pulls in a dozen other projects, each pulling in their own dependencies. It isn’t lightweight if even one dependency is heavy, and I wish people would stop making this claim. But the security impact is that now there are dozens of projects I have to audit every time one of those dependencies does a version bump and I take it.

            This is an issue. It is an impediment to the people contributing to the Bazaar; it disincentivizes both developing and using OSS, and it’s especially harmful now, when Linux is gaining more widespread popularity. I believe we need a concerted reaction.

            Go needs better security-focused static code analysis tools; there are any number of code quality checkers, but there are precious few security checkers, and the ones that exist focus on developer practices, such as string sanitization. I want a reporting tool that will identify which of my dependencies make network connections, and where, and what kind of information is being sent, so that I can focus my audits. Ideally, the Go team would run a service that provides a health check for a package - a third-party analysis users (developers and end users) can trust… but at this point I’d pay a monthly fee to be able to submit packages and get a badge.
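            For what it’s worth, here is roughly the kind of report I have in mind, as a quick Go sketch using golang.org/x/tools/go/packages - untested, and “example.com/mymodule” is a placeholder for your own module path. It walks the transitive dependency graph and flags any package that imports the networking or subprocess packages, so you know where to focus an audit:

            ```go
            // Sketch: list every dependency that imports net, net/http, or os/exec.
            package main

            import (
                "fmt"
                "log"
                "sort"
                "strings"

                "golang.org/x/tools/go/packages"
            )

            var suspect = map[string]bool{
                "net": true, "net/http": true, "net/rpc": true, "os/exec": true,
            }

            func main() {
                cfg := &packages.Config{
                    Mode: packages.NeedName | packages.NeedImports | packages.NeedDeps,
                }
                roots, err := packages.Load(cfg, "./...")
                if err != nil {
                    log.Fatal(err)
                }

                flagged := map[string][]string{} // dependency -> suspicious imports
                seen := map[string]bool{}
                var walk func(p *packages.Package)
                walk = func(p *packages.Package) {
                    if seen[p.PkgPath] {
                        return
                    }
                    seen[p.PkgPath] = true
                    for path, imp := range p.Imports {
                        // Skip first-party code; "example.com/mymodule" is a placeholder.
                        if suspect[path] && !strings.HasPrefix(p.PkgPath, "example.com/mymodule") {
                            flagged[p.PkgPath] = append(flagged[p.PkgPath], path)
                        }
                        walk(imp)
                    }
                }
                for _, p := range roots {
                    walk(p)
                }

                deps := make([]string, 0, len(flagged))
                for dep := range flagged {
                    deps = append(deps, dep)
                }
                sort.Strings(deps)
                for _, dep := range deps {
                    fmt.Printf("%-50s imports %v\n", dep, flagged[dep])
                }
            }
            ```

            It wouldn’t tell you where the data goes or what is being sent - that still needs the kind of hosted, trusted analysis I’m describing - but it would at least narrow a large dependency tree down to the handful of packages worth reading closely.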

            I think someone with InfoSec expertise could do a reasonable job with at least the statically compiled, modern languages, but I agree with your comment about it taking a community. If each PL community provided a static code security analysis tool, someone would eventually write a self-hostable service that could provide a score for most projects; at that point I’d expect this to become the purview of distributions - it’d be a significant value-add, a greater contribution than making yet another Ubuntu derivative that varies only by the default DE.

            Perhaps there are other tools such as LLM-driven code analysis; I’d expect that would be more effective with a model specifically trained to look for supply-chain attacks.

            I also contribute manifests to a couple of distributions, and I know neither of them does security gatekeeping on submitted packages, for the simple reason that the time between submission and acceptance is too short for anyone to have performed an analysis.

            This is going to bite us; the damage it’s going to cause to OSS will be far worse if we, as a global community, react to a broadly newsworthy event than it will be if we’re proactive and prevent it.

            I don’t think the average Joe or Jill is going to be interacting with all sorts of random obscure FOSS projects the way we more technical users - the ones who program or experiment with services ourselves - do.

            Windows had an attack not so long ago

            Linux advocates long called Windows less secure merely because virus makers were ignoring Linux as too small to care about. That changed as the world’s internet infrastructure transitioned to being dominated by Linux.

            The issue I’m concerned about specifically is FOSS, regardless of the platform. In a full half of the projects I maintain, I create release builds for Linux, Windows, OSX (Darwin), and OpenBSD. The attack is on the FOSS model, where software is freely exchanged.

            We are welcoming an entirely new wave of Windows refugees, many of whom are less technical. They’re mostly going to be using FOSS when they arrive, and the nature of supply-chain attacks is that they can show up in any program, even main packages included in KDE, for example. Yes, they can also show up in commercial software, but unlike community-driven FOSS, commercial entities have the means to perform security audits and consumers have some legal recourse - an organization to litigate against.

            I’m advocating for a concerted, proactive effort by InfoSec specialists in the FOSS community to come up with 1) a manifesto about how we’re going to respond to supply-chain attacks and malicious software, 2) tooling to help developers audit their dependencies in whatever PL they’re using, and 3) some mechanism for publishing results, even if it’s self-hosted. In the last case, diligent users will check multiple hosters against each other, and a couple will probably emerge as “trusted providers”; if the Go team hosted such a service, it would become the de facto authority. The Rust team could do the same.