• @[email protected]
    link
    fedilink
    1218 months ago

    An article about Nvidia in the Linux community? Surely all the comments will be productive and discuss the topic at hand.

    Clueless

  • RandomLegend [He/Him] · 74 points · 8 months ago

    Too little, too late.

    Already sold my 3070 and went for a 7900 XT because I got fed up with Nvidia being lazy.

      • RandomLegend [He/Him] · 10 points · 8 months ago

        Well, I dearly miss CUDA, as I can’t get ZLUDA to work properly with Stable Diffusion and FSR is still leagues behind DLSS… but yeah, overall I am very happy.

    • Random Dent · 13 points · 8 months ago

      I had to upgrade my laptop about two years ago and decided to go full AMD, and it’s been awesome. I’ve been running Wayland as a daily driver the whole time, and I don’t even really notice it anymore.

    • @[email protected]
      link
      fedilink
      English
      78 months ago

      If the day comes that I want to upgrade my 3080, I’ll switch to an AMD solution, but until then I’ll take any improvement I can get from Nvidia.

    • @[email protected]
      link
      fedilink
      -38 months ago

      I don’t believe Nvidia were the ones being lazy in this regard; they submitted the merge request for explicit sync quite some time ago. The Wayland devs essentially took their sweet time merging the code.

    • @cbarrick · 42 points · 8 months ago

      Unfortunately, those of us doing scientific compute don’t have a real alternative.

      ROCm just isn’t as widely supported as CUDA, and neither is Vulkan for GPGPU use cases.

      AMD dropped the ball on GPGPU, and Nvidia is eating their lunch. Linux desktop users be damned.

      • @TropicalDingdong · 10 points · 8 months ago

        Yep, yep, and yep.

        And they’ve been eating their lunch for so long at this point that I’ve given up on that changing.

        The new world is built on CUDA, and that’s just the way it is. I don’t really want an Nvidia card; Radeon seems far better for price to performance. Except I can justify an Nvidia for work.

        I can’t justify a Radeon for work.

        • @cbarrick · 11 points · 8 months ago

          Long term, I expect Vulkan to be the replacement for CUDA. ROCm isn’t going anywhere…

          We just need fundamental Vulkan libraries to be developed that can replace the CUDA equivalents.

          • cuFFT -> vkFFT (this definitely exists)
          • cuBLAS -> vkBLAS (is anyone working on this?)
          • cuDNN -> vkDNN (this definitely doesn’t exist)

          At that point, adding Vulkan support to XLA (Jax and TensorFlow) or ATen (PyTorch) wouldn’t be that difficult.
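
          For a sense of what those CUDA equivalents look like on the host side, here is a minimal cuFFT sketch (illustrative only; error handling omitted) of the kind of call a vkFFT-style library has to stand in for:

          ```cpp
          // Minimal cuFFT usage: plan a 1-D FFT and run it on a device buffer.
          // This is the CUDA-side baseline a Vulkan library such as vkFFT must match.
          #include <cufft.h>
          #include <cuda_runtime.h>

          int main() {
              const int N = 1 << 20;                          // 1M-point signal
              cufftComplex* data = nullptr;
              cudaMalloc(&data, sizeof(cufftComplex) * N);    // device buffer

              cufftHandle plan;
              cufftPlan1d(&plan, N, CUFFT_C2C, 1);            // 1-D complex-to-complex plan
              cufftExecC2C(plan, data, data, CUFFT_FORWARD);  // in-place forward FFT
              cudaDeviceSynchronize();                        // wait for the GPU to finish

              cufftDestroy(plan);
              cudaFree(data);
              return 0;
          }
          ```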

          • DarkenLM · 18 points · 8 months ago

            > wouldn’t be that difficult.

            The number of times I’ve said that, only to be quickly proven wrong by the fundamental forces of existence, is the reason that’s going to be written on my tombstone.

          • @TropicalDingdong · 3 points · 8 months ago

            I think it’s just path stickiness at this point. CUDA works, and then you can ignore its existence and do the thing you actually care about. ML in the pre-CUDA days was painful. CUDA makes it not painful. Asking people to go back to the painful way…

            Good luck…

      • @[email protected]
        link
        fedilink
        48 months ago

        I find it eerily odd how AMD seems to almost intentionally stay out of Nvidia’s way in terms of CUDA and a couple of other things. I don’t wish to speculate, but considering how AI is having a blowout yet AMD is basically not even trying, it feels as if the Nvidia CEO being cousins with AMD’s CEO has something to do with it. Maybe I am reading too much into it, but there’s something going on. Why would AMD leave so much money on the table?

    • @[email protected]
      link
      fedilink
      English
      158 months ago

      That’s great.

      I’d still like my Nvidia card to work, so I’m happy about this, and when AMD on Linux eventually swaps over to explicit sync, I’ll be happy for those users too.

        • DumbAceDragon · 3 points · 8 months ago

          Cool. It should still use it, though, if for nothing else than the parallelization improvements it allows.

          If we stuck with the “it works fine, so I’m not moving away from it” approach, then we’d all still be on X11. Nvidia sucks and they should be more of a team player, but I think they were right to push for explicit sync over implicit. We should’ve been doing this from the beginning on Wayland.

  • @[email protected]
    link
    fedilink
    9
    edit-2
    8 months ago

    Now all they need is a complete nvidia-settings application under Wayland that allows Coolbits to be set, and I may be able to use Wayland. For some reason, my RTX 2070S boosts far higher than its already factory-overclocked boost clocks, resulting in random crashes; I have to use GWE to limit the boost clocks to OEM specs to prevent crashing.

    Strangely enough, this was never a problem under Windows.
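
    For reference, NVML (the same library nvidia-smi is built on) exposes a clock-locking call that doesn’t depend on X11 or Coolbits. A rough sketch below, assuming root privileges; the 300–1905 MHz range is purely illustrative, not the 2070S’s actual OEM numbers:

    ```cpp
    // Rough sketch: clamp the GPU boost range via NVML instead of Coolbits/nvidia-settings.
    // Needs root; the 300-1905 MHz range is illustrative, not the 2070S OEM spec.
    #include <nvml.h>
    #include <cstdio>

    int main() {
        if (nvmlInit_v2() != NVML_SUCCESS) return 1;

        nvmlDevice_t dev;
        if (nvmlDeviceGetHandleByIndex_v2(0, &dev) == NVML_SUCCESS) {
            // Lock the allowed GPU clock range (min MHz, max MHz).
            nvmlReturn_t rc = nvmlDeviceSetGpuLockedClocks(dev, 300, 1905);
            std::printf("lock clocks: %s\n", nvmlErrorString(rc));
            // nvmlDeviceResetGpuLockedClocks(dev) removes the limit again.
        }

        nvmlShutdown();
        return 0;
    }
    ```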

  • lemmyvore · 3 points · 8 months ago

    It will not, though. Explicit sync is not a magic solution; it’s just another way of syncing GPU work. Unlike implicit sync, it needs to be implemented by every part of the graphical stack. Nvidia implementing it will not solve issues with compositors not having it, and graphical libraries not having it, and apps not supporting it, and so on and so forth. It’s a step in the right direction, but it won’t fix everything overnight like some people think.

    Also, it’s silly that this piece mentions Wayland and Nvidia, because (1) Wayland doesn’t implement sync of any kind, they probably meant to say “the Wayland stack”, and (2) Nvidia is not the only driver that needs to implement explicit sync.

    • @visor841 · 61 points · 8 months ago

      > will not solve issues with compositors not having it

      Many compositors already have patches for explicit sync, which should get merged fairly quickly.

      > graphical libraries not having it

      Both Vulkan and OpenGL have support for explicit sync.

      > apps not supporting it

      Apps don’t need to support it; they just need to use Vulkan or OpenGL, and those will handle it.

      > Wayland doesn’t implement sync of any kind, they probably meant to say “the Wayland stack”

      Wayland has a protocol specifically for explicit sync; it’s as much a part of Wayland as pretty much anything else that’s part of Wayland.

      > Nvidia is not the only driver that needs to implement explicit sync.

      Mesa has already merged explicit sync support.
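
      To make “explicit” concrete: in Vulkan, synchronization is something the application (or the toolkit underneath it) signals and waits on directly, for example through timeline semaphores, rather than being inferred from buffer access. A minimal sketch (Vulkan 1.2+; assumes a valid VkDevice already exists, everything else is standard Vulkan API):

      ```cpp
      // Minimal timeline-semaphore sketch: the kind of explicit synchronization
      // Vulkan already exposes to applications.
      #include <vulkan/vulkan.h>
      #include <cstdint>

      // Create a timeline semaphore (core in Vulkan 1.2).
      VkSemaphore create_timeline_semaphore(VkDevice device) {
          VkSemaphoreTypeCreateInfo type_info{};
          type_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_TYPE_CREATE_INFO;
          type_info.semaphoreType = VK_SEMAPHORE_TYPE_TIMELINE;
          type_info.initialValue = 0;

          VkSemaphoreCreateInfo create_info{};
          create_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO;
          create_info.pNext = &type_info;

          VkSemaphore semaphore = VK_NULL_HANDLE;
          vkCreateSemaphore(device, &create_info, nullptr, &semaphore);
          return semaphore;
      }

      // Block on the CPU until the GPU has signalled the given timeline value.
      void wait_for_value(VkDevice device, VkSemaphore semaphore, uint64_t value) {
          VkSemaphoreWaitInfo wait_info{};
          wait_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_WAIT_INFO;
          wait_info.semaphoreCount = 1;
          wait_info.pSemaphores = &semaphore;
          wait_info.pValues = &value;
          vkWaitSemaphores(device, &wait_info, UINT64_MAX);
      }
      ```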

  • @[email protected]
    link
    fedilink
    English
    -11
    edit-2
    8 months ago

    Doesn’t this mean application developers will have to explicitly sync the graphical state? If that’s the case, then devs will have to write custom code for it to work on NVIDIA, correct? If so, I doubt this will “finally solve” any issues, only finally provide the ability to solve them… explicitly and with a lot of dev work + required awareness.

    How come AMD doesn’t need this?

    P.S. Obligatory Fuck NVIDIA

    Anti Commercial AI thingy

    CC BY-NC-SA 4.0

    • @aksdb · 5 points · 8 months ago

      Nah, explicit sync is the objectively better model if you want high performance. Android went for explicit sync right from the start, and from what I gather, Intel and AMD prefer it too. The problem is that the graphics stacks on Linux have been using implicit sync for ages, and so far no one has dared to change the status quo. Nvidia was “simply” refusing to implement an inferior mechanism in their driver. While somewhat understandable, it was still a decision made on the backs of their users.