xz backdoor
The Xz Backdoor Highlights the Vulnerability of Open Source Software—and Its Strengths

Jason Koebler · Mar 30, 2024 at 3:27 PM

The backdoor highlights the politics, governance, and community management of an ecosystem exploited by massive tech companies and largely run by volunteers.

Image: Zulian Firmansyah, Unsplash

Friday afternoon, Andres Freund, a software developer at Microsoft, sent an email to a listserv of open source software developers with the subject line “backdoor in upstream xz/liblzma leading to ssh server compromise.” What Freund had stumbled upon was a malicious backdoor in xz Utils, a compression utility used in many major Linux distributions; the backdoor increasingly seems to have been put there on purpose by a trusted maintainer of the open source project. The “xz backdoor” has quickly become one of the most important and most-discussed vulnerabilities in recent memory.

Ars Technica has a detailed writeup of the technical aspects of the backdoor, which intentionally interfered with SSH, a security protocol that allows for secure connections over unsecured networks. The specific technical details are still being debated, but basically, a vulnerability was introduced into a very widely used utility that chains into a type of encryption used by many important internet servers. Luckily, this specific backdoor seems like it was caught before it was introduced into the code of major Linux distributions.

Alex Stamos, the chief trust officer of SentinelOne and a lecturer at Stanford’s Internet Observatory, called the discovery of this backdoor “the most interesting hack of the year.”

This is because the mechanism of the attack highlights both the strengths and weaknesses of open source software and the ecosystem under which open source software is developed, and the extent to which the internet and massive tech companies rely on an ecosystem that is largely run by volunteers.

In this case, the vulnerabilities were introduced by a coder who goes by the name Jia Tan (JiaT75 on GitHub) who was a “maintainer” of the xz Utils codebase, meaning they could make commits (update the software’s code) without oversight from others. Critically, Tan has been one of the maintainers of xz Utils for almost two years and also maintains other critical open source projects. This raises the possibility, of course, that they have always been a bad actor and could have been introducing vulnerabilities into earlier versions of xz Utils and other open source projects.

“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” Freund wrote in his initial email.

The open source community is now doing a mix of collaborative damage control, soul searching, and infighting over the backdoor, how to respond to it, and what it means for the broader open source ecosystem. The xz backdoor was seemingly caught before it made its way into major Linux distributions, which hopefully means that there will not be widespread damage caused by the backdoor. But it is, at best, a close call that Freund himself said was essentially “accidentally” discovered.

This is all important because huge parts of the internet and software infrastructure rely on free and open source software that is often maintained by volunteer software developers. This has always been a controversial and complicated state of affairs, because big tech companies take this software, use it in their products, and make a lot of money from them. Many of these open source codebases are maintained by a small number of people doing it on a volunteer basis, and many of these projects have complicated politics about who is allowed to be a maintainer and how a project should be maintained and developed. If a trusted maintainer of a critical open source codebase is actually a malicious hacker, vulnerabilities could be introduced into widely used, critical software and chaos could ensue.

Stamos noted that the backdoor “proves what everybody suspected about the supply-chain risks of OSS. Should hopefully drive some serious investment by the companies that profit from open-source to look for back doors using scalable means.”

The backdoor highlights open source software’s strengths and its weaknesses in that, well, everything is happening in the open.

While a malicious maintainer can commit code that introduces a backdoor, the community can also actively analyze the code and trace exactly what was introduced, when it was introduced, who did it, and what the code does. The project can roll back (and is rolling back) its codebase to an earlier version from before the vulnerability was introduced. The coding history and email arguments of that user can be traced over time, and the broader developer community can make educated guesses about how this all happened. As I’m writing this, coders are analyzing Jia Tan’s contributions to other projects and the political discussions in listservs that led to them becoming a trusted maintainer in the first place.

On the open source software security listserv, developers are trying to make sense of what happened, and are debating about how and when the discovery of the vulnerability should have been made public (the discovery was made one day before it was distributed to the broader listserv). Tavis Ormandy, a very famous white hat hacker and security researcher who works for Google, wrote on the listserv, “I would have argued for immediately discussing this in the open.”

“We’re delaying everybody else’s ability to react,” he added. Others argued that making the vulnerability known immediately could have incentivized attackers to exploit the bug, or could have allowed others to do so. On Mastodon, software developers are criticizing Microsoft and GitHub for taking down some of the affected code repositories as people are trying to analyze it.

“Hey, it’s totally cool that Microsoft GitHub blocked access to one of the repositories in the very center of the xz backdoor saga,” Michal Woźniak, a white hat hacker who was part of a team that discovered DRM in a Polish train earlier this year, wrote on Mastodon. “It’s not like a bunch of people are scrambling to try to make sense of all this right now, or that specific commits got linked to directly from media and blogposts and the like. Cool, cool.” Other coders mused that Copilot, a subscription AI coding assistant created by GitHub, could have integrated some of the malicious code into its training data.

All of this discussion and many of these issues are not normally possible when a vulnerability is discovered in closed source software, which is kept private by the company and whose governance is determined by the companies releasing a product. And that’s what makes all of this so interesting. Not only is vulnerability mitigation being managed in public, but so are the culture, politics, supply chain, and economics that govern this type of critically important software.

About the author

Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.

  • @[email protected]
    link
    fedilink
    English
    139 months ago

    I wonder what sort of mitigations we can take to prevent this kind of attack, wherein someone contributes to an open-source project to gain trust and ultimately works towards making users of that software vulnerable. Besides scrutinizing other people’s contributions more closely (as the article mentions), I don’t see what else one could do. There are many ways vulnerabilities can be introduced, and a lot of them are hard to spot (especially in C, with stuff like undefined behavior and the lack of modern safety features), so I don’t think “being more careful” is going to be enough.

    I imagine such attacks will become more common now, and that these kinds of attacks could become very appealing to governments.

    • @[email protected]
      link
      fedilink
      129 months ago

      As far as I understand, in this case opaque binary test data was gradually added to the repository. Also, the built binaries did not correspond 1:1 with the code in the repo due to some buildchain reasons. Stuff like this makes it difficult to spot deliberately placed bugs or backdoors.

      I think some measures can be:

      • establish reproducible builds in CI/CD pipelines
      • ban opaque data from the repository. I read some people expressing justification for this test data being opaque, but that is nonsense. There’s no reason why you couldn’t compress+decompress a lengthy Creative Commons text, or, for binary data, encrypt that text with a public password, or use a sequence from a pseudo-random number generator with a known seed, or a past compiled binary of this very software, or … or … or … (a rough sketch of the seeded-generator idea follows below this list)
      • establish technologies that make it hard to place integer overflows or deliberately overrun array ends. That would make it a lot harder to plant misbehavior in the code without it being so obvious that others would easily notice. Rust, linters, Valgrind, etc. would be useful for that.
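
      To make the “ban opaque data” point concrete, here is a rough sketch in C of the seeded-generator idea mentioned above. The file name, seed, and generator are made up for illustration; this is not how the xz project actually manages its test suite. The idea is to commit a tiny generator plus a fixed seed instead of the binary blob itself:

      /* gen_testdata.c: regenerate a "binary" test file from a fixed,
       * documented seed, so the repository never has to carry an opaque,
       * unauditable blob. Names and the seed are illustrative only. */
      #include <stdint.h>
      #include <stdio.h>

      /* xorshift64: a tiny, well-known PRNG; any documented generator works
       * as long as the seed is committed next to this file. */
      static uint64_t next(uint64_t *state) {
          uint64_t x = *state;
          x ^= x << 13;
          x ^= x >> 7;
          x ^= x << 17;
          return *state = x;
      }

      int main(void) {
          uint64_t state = 0x5EEDC0FFEE123457ULL;  /* fixed, nonzero seed */
          FILE *f = fopen("bad_input.bin", "wb");
          if (!f) { perror("fopen"); return 1; }
          for (int i = 0; i < 4096; i++)
              fputc((int)(next(&state) & 0xFF), f);  /* deterministic bytes */
          fclose(f);
          return 0;
      }

      Any well-documented generator would do. The point is only that every byte of the test file can be regenerated, and therefore reviewed, from something a human can actually read.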

      So I think from a technical perspective there are ways to at least give attackers a hard time when trying to place covert backdoors. The larger problem is likely who does the work, because scalability is just such a hard problem with open source. Ultimately I think we need to come together globally and carry this work on many shoulders. For example, the “prossimo” project by the Internet Security Research Group (the organisation behind Let’s Encrypt) is working on bringing memory safety to critical projects: https://www.memorysafety.org/ I also sincerely hope the German Sovereign Tech Fund ( https://www.sovereigntechfund.de/ ) takes this incident as a new angle for the outstanding work they’re doing. And ultimately, we need many more such organisations and initiatives, from both private companies and the public sector, to protect, together, the technology that runs our societies.

      • @gedhrel · 1 point · 9 months ago

        The test case purported to be bad data, which you presumably want to test the correct behaviour of your dearchiver against.

        Nothing this did looks to involve memory safety. It uses features like ifunc to hook behaviour.
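
        For context, ifunc is a GNU/ELF mechanism (GCC/glibc on Linux) where a resolver function runs while the program is being loaded and decides which implementation a given symbol will point to. Here is a toy sketch of the mechanism with made-up names; it is not the actual backdoor code, which abused this stage far more subtly:

        /* ifunc_demo.c: toy illustration of GNU ifunc (GCC/glibc on ELF).
         * A resolver runs during program loading and decides which
         * implementation the symbol "transform" will point to. */
        #include <stdio.h>

        static int transform_plain(int x)  { return x + 1; }
        static int transform_hooked(int x) { return x - 1; }

        /* A legitimate resolver might check CPU features here; the xz
         * backdoor abused this stage to run its own code before main(). */
        static int (*resolve_transform(void))(int) {
            int use_hook = 1;
            return use_hook ? transform_hooked : transform_plain;
        }

        int transform(int x) __attribute__((ifunc("resolve_transform")));

        int main(void) {
            printf("transform(41) = %d\n", transform(41)); /* 40, not 42 */
            return 0;
        }

        Because the resolver runs before main(), it is a convenient place to silently swap in different behaviour, which is what made it such an attractive hook point here.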

        The notion of reproducible CI is interesting, but there’s nothing preventing this setup from repeatedly producing the same output in (say) a Debian package build environment.

        There are many signatures here that look “obvious” with hindsight, but ultimately this comes down to establishing trust. Technical sophistication aside, this was a very successful attack against that trust foundation.

        It’s definitely the case that the stack of C tooling for builds (CMakeLists.txt, autotools) makes obfuscating content easier. You might point at modern build tooling like cargo as an alternative - however, build.rs and proc macros are not typically sandboxed at present. I think it’d be possible to replicate the effects of this attack using that tooling.

    • @[email protected]
      link
      fedilink
      119 months ago

      I’m assuming, with as small a tin foil hat as possible, that the CIA and NSA purposefully do this in major closed and open source software.

    • @[email protected]
      link
      fedilink
      8
      edit-2
      9 months ago

      In the sysadmin world, the current approach is to follow a zero-trust and defense-in-depth model. Basically, you do not trust anything. You assume that there’s already a bad actor/backdoor/vulnerability in your network, and so you work on mitigating that risk, using measures such as compartmentalisation and sandboxing (of data/users/servers/processes etc.), role-based access control (RBAC), just-enough-access (JEA), just-in-time access (JIT), attack surface reduction, etc.

      Then there’s network level measures such as conditional access, and of course all the usual firewall and reverse-proxy tricks so you’re never exposing a critical service such as ssh directly to the web. And to top it all off, auditing and monitoring - lots of it, combined with ML-backed EDR/XDR solutions that can automatically baseline what’s “normal” across your network, and alert you of any abnormality. The move towards microservices and infrastructure-as-code is also interesting, because instead of running full-fledged VMs you’re just running minimal, ephemeral containers that are constantly destroyed and rebuilt - so any possible malware wouldn’t live very long and would have to work hard at persistence. Of course, it’s still possible for malware to persist in a containerised environment, but again that’s where the defense-in-depth and monitoring comes into play.

      So in the case of xz, say your hacker has access to ssh - so what? The box they got access to was just a jumphost, they can’t get to anywhere else important without knowing what the right boxes and credentials are. And even if those credentials are compromised, with JEA/JIT/MFA, they’re useless. And even if they’re compromised, they’d only get access into a very specific box/area. And the more they traverse across the network, the greater the risk of leaving an audit trail or being spotted by the XDR.

      Naturally none of this is 100% bullet-proof, but then again, nothing is. But that’s exactly what the zero-trust model aims to combat. This is the world we live in, where we can no longer assume something is 100% safe. Proprietary software users have been playing this game for a long time, it’s about time we OSS users also employ the same threat model.

    • chebra · 6 points · 9 months ago

      @Faresh 1.) Making it easier to analyze. There are multiple steps in the whole process which may be hiding an exploit; the “tarball-not-same-as-git” issue is a clear example. Sure, reviewing will still be necessary and it will still be difficult, but it doesn’t have to be as difficult as it is today. 2.) Stop giving out maintainer rights, fork instead. That’s what pull requests are for. 3.) We should be careful if our critical infrastructure depends on a hobby project: either pay, or don’t depend on it.