Do you think it will be possible to run GNU/Linux operating systems on Microsoft’s brand-new “Copilot+ PCs”? They were unveiled just yesterday, and honestly, the sales pitch is quite impressive! A Verge article on them: Link

  • @[email protected]
    link
    fedilink
    English
    116 months ago

    Yes, Linux already runs on ARM chips, and Qualcomm itself has pledged Linux support for the new chips within the next six months.

    There is one big issue though: most applications won’t be available, since they have to be built specifically for ARM. This is a big deal because I don’t think Linux has a proper x86→ARM translation layer. That means most of your apps and games won’t work.

    I’d wait at least a couple of months to see how the ecosystem is before buying one. It’s likely going to be a very bumpy ride for the next ~2 years until everything you’d need is supported.

    • @[email protected]
      link
      fedilink
      English
      126 months ago

      Most applications are, or can be, compiled for ARM. You just need the right repo, or to compile from source.

      Raspberry Pis are very popular and are already ARM-based.

      You don’t need a translation layer unless the software is proprietary and the vendor isn’t willing to compile for ARM.

      • @[email protected]
        link
        fedilink
        English
        26 months ago

        > Most applications are/can be compiled for arm. You just need the right repo or to compile from source.

        Maybe the essentials like browsers and command-line tools are available for ARM, but I’m talking about applications in general. Almost all of Flatpak won’t work, not everyone will bother compiling for ARM, and those that do probably won’t do so as soon as these laptops release. You’d have to be a real power user to compile stuff from source and not suffer. Not to mention proprietary software that is already reluctant to support Linux: imagine how long it’ll take for things like Zoom and Discord to get official support. Before anyone hits me with the “those don’t matter”, they do matter to a lot of people.

        > You don’t need a translation layer unless the software is proprietary and the vendor isn’t willing to compile for arm.

        Even then, there will be tons of legacy apps people will want to run.

        • @[email protected]
          link
          fedilink
          English
          3
          edit-2
          6 months ago

          Have you looked? Almost all software on Flathub (definitely the majority) lists aarch64 (ARM) builds. So yes, most things work.

          Again, proprietary software is the main issue here. Open source software is pretty easy to recompile.

          I understand QEMU can emulate x86 in those cases, with a performance hit.

    • @[email protected]
      link
      fedilink
      English
      2
      edit-2
      6 months ago

      Wow. That settles the discussion pretty quickly…

      I’m not sure about the translation layer… Aren’t there things like QEMU and box64? And isn’t multiarch support part of most Linux distributions these days? I always thought it takes just a few commands to make your system execute foreign binaries. I mean, I’ve only ever tried cross-compiling for ARM and running 32-bit games on amd64, so I don’t know that much. In the end I don’t use much proprietary software, so it’s not really an issue for me: >99% of the Linux software I use is available for ARM. But I can see how that’d be an issue for a gamer, regardless of whether the operating system is Windows, Linux or macOS.

      And I’m not really interested in the AI coprocessor itself. The real question for me is: can it do LLM inference as fast as an M2/M3 MacBook? For that it’d need RAM connected via a wide bus. And then there’s the question of what a machine with 64 GB of RAM costs. That’s the major drawback with a MacBook: they get super expensive if you want a decent amount of RAM.

      • @[email protected]
        link
        fedilink
        English
        36 months ago

        > Isn’t there things like qemu and box64…

        Yeah, but they’re experimental and probably very buggy. I’ve used box64 on my phone; it doesn’t play well with everything.

        > Can it do LLM inference as fast as a M2/M3 Macbook?

        Allegedly it should be better at AI tasks than M-series laptops. Many manufacturers have actually started listing prices for the new laptops; the new Microsoft ones start at 16 GB of RAM for $1,000. I know the Lenovo one can reach 64 GB of RAM, but I’m not sure about the pricing.

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          6 months ago

          By the time Snapdragon X Elite devices are broadly available you probably have to compare them against the M4. Apple specifies the M4’s NPU with 38 TOPS while Qualcomm specifies the Snapdragon X Elite with 45 TOPS, but I wouldn’t bet on these numbers being directly comparable (just like TFLOPS from different GPU manufacturers).

          The M4 also made quite a big jump in single-core performance, and its multi-core performance seems comparable to what the X Elite can achieve, unless we’re talking about its 80-watt mode, but then we’d have to take Apple’s “Pro” and “Max” chips into account. Keep in mind that current M4 performance metrics stem from a 5 mm-thick, passively cooled device. It will be interesting to see whether Qualcomm releases bigger chips on this architecture.

          Price is obviously where the X Elite could shine, as there’ll be plenty of devices to choose from (once they’re actually broadly available). If you need anything above the base models (which at Apple mostly start at 8 GB RAM and a 256 GB SSD), you’ll likely pay a lot less for upgrades than Apple’s absolutely ridiculous upgrade pricing. Price to performance might be very good here.

          If and when Linux distributions start seamlessly supporting x86 apps on ARM, I’ll be interested in a thin-and-light ARM device, if it really turns out to be that much more energy efficient than x86 chips. Most comparisons use Intel as the reference for x86 efficiency, but AMD has a decent lead here, and I feel it’s not as far off ARM chips as the marketing makes it seem. So for the time being, I think going with something like an AMD Ryzen 7840U/8840U is the way to go for the broadest Windows/Linux compatibility while achieving decent efficiency.

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          6 months ago

          Hmmh. I can’t really make an informed statement. I can’t fathom QEMU being experimental: it’s like a 20-year-old project, used by lots of people. I’m not sure. And I’ve yet to try box64.

          I looked it up. The Snapdragon X Elite “Supports up to 64GB LPDDR5, with 136 GB/s memory bandwidth”, while the Apple M2/M3 chips have anywhere from 100 GB/s of memory bandwidth up to 150, 300 or 400 GB/s (800 GB/s in the Ultra). And a graphics card has something like ~300 to ~1000 GB/s.

          (Of course that’s only relevant for running large language models.)
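          Those bandwidth numbers translate directly into a rough ceiling on LLM token throughput: a bandwidth-bound decoder has to stream all of its weights from RAM for every generated token, so tokens/s ≤ bandwidth / model size. A back-of-the-envelope sketch (bandwidth figures from above; the 4 GB model size is a hypothetical ~7B-parameter model at 4-bit quantization):

          ```python
          # Crude upper bound for a memory-bandwidth-bound LLM decoder:
          # every token requires reading all weights once from RAM, so
          # tokens/s <= bandwidth (GB/s) / model size (GB).

          def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
              """Theoretical tokens/s ceiling for a bandwidth-bound model."""
              return bandwidth_gb_s / model_size_gb

          MODEL_GB = 4.0  # hypothetical: ~7B parameters quantized to 4 bits

          for name, bw in [("Snapdragon X Elite", 136),
                           ("Apple M2/M3 (base)", 100),
                           ("Apple M3 Max", 400),
                           ("Apple M2 Ultra", 800)]:
              print(f"{name}: <= {max_tokens_per_sec(bw, MODEL_GB):.0f} tokens/s")
          ```

          By this crude bound the X Elite would sit between a base M2/M3 and the Max/Ultra chips; real-world throughput is lower once compute, caches and memory-controller efficiency come into play.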