• Australis13
    38
    1 day ago

    The big win I see here is the amount of optimisation they achieved by moving from high-level CUDA to lower-level PTX. This suggests that developing these models going forward can be made a lot more energy-efficient, something I hope can be extended to their execution as well. As it stands currently, “AI” (read: LLMs and image generation models) consumes way too many resources to be sustainable.

    • @KingRandomGuy
      5
      24 hours ago

      What I’m curious to see is how well these types of modifications scale with compute. DeepSeek is restricted to H800s instead of H100s or H200s. These are gimped cards designed to get around export controls, and accordingly they have lower memory bandwidth (~2 vs ~3 TB/s) and, most notably, much slower GPU-to-GPU communication (something like 400 GB/s vs 900 GB/s). The specific reason they used PTX in this application was to help alleviate some of the bottlenecks due to the limited inter-GPU bandwidth, so I wonder whether it would still improve performance on H100 and H200 GPUs, where that bandwidth is much higher.

    • @Dkarma
      3
      1 day ago

      Yeah, I’d like to see size comparisons too. The CUDA stack is massive.

        • @Knock_Knock_Lemmy_In
          5
          1 day ago

          Google was giving me bad search results about PTX, so I just posted an opinion and hoped Cunningham’s Law would work.

      • @mholiv
        16
        1 day ago

        Kind of the opposite, actually. PTX is in essence NVIDIA-specific assembly, just like how ARM or x86_64 assembly is tied to ARM or x86_64 hardware.

        At least with CUDA there are efforts like ZLUDA. CUDA is more like what Objective-C was on the Mac: basically tied to the platform, but at least you could in theory write a compiler for another target.
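
        To make the assembly analogy concrete, here’s roughly what the mapping looks like: a trivial CUDA C++ kernel and a simplified, illustrative sketch of the PTX that `nvcc` emits for it (register names abbreviated, addressing elided; actual output varies by toolkit version and target architecture):

        ```cuda
        // Trivial CUDA C++ kernel: element-wise add.
        __global__ void add(const float* a, const float* b, float* c) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            c[i] = a[i] + b[i];
        }

        // Simplified sketch of the corresponding PTX (inspect the real
        // thing with `nvcc -ptx add.cu`). PTX is a virtual ISA that the
        // driver still JIT-compiles to SASS for the specific GPU:
        //
        //   .visible .entry add(.param .u64 p0, .param .u64 p1, .param .u64 p2)
        //   {
        //       mov.u32     %r1, %ctaid.x;      // blockIdx.x
        //       mov.u32     %r2, %ntid.x;       // blockDim.x
        //       mov.u32     %r3, %tid.x;        // threadIdx.x
        //       mad.lo.s32  %r4, %r1, %r2, %r3; // i = ctaid * ntid + tid
        //       ...                             // address computation elided
        //       ld.global.f32  %f1, [%rd1];
        //       ld.global.f32  %f2, [%rd2];
        //       add.f32        %f3, %f1, %f2;
        //       st.global.f32  [%rd3], %f3;
        //       ret;
        //   }
        ```

        The special registers (`%ctaid`, `%ntid`, `%tid`) and instruction forms are defined by NVIDIA’s PTX ISA spec, which is why writing at this level ties you to their hardware in a way CUDA C++ alone doesn’t.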

        • @KingRandomGuy
          3
          1 day ago

          IIRC ZLUDA does support compiling PTX. My understanding is that this is part of why Intel and AMD eventually didn’t want to support it - it’s not a great idea to tie yourself to someone else’s architecture that you have no control over or license to.

          OTOH, CUDA itself is just a set of APIs and their implementations on NVIDIA GPUs. Other companies can re-implement them. AMD has already done this with HIP.
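
          As a sketch of how close that re-implementation is (the names are real CUDA/HIP runtime entry points, but this is illustrative, not any particular project’s code): HIP mirrors the CUDA runtime API almost one-to-one, which is what makes AMD’s `hipify` tools largely a rename pass:

          ```cuda
          // CUDA runtime call                        // HIP equivalent
          cudaMalloc(&d_a, bytes);                    // hipMalloc(&d_a, bytes);
          cudaMemcpy(d_a, h_a, bytes,                 // hipMemcpy(d_a, h_a, bytes,
                     cudaMemcpyHostToDevice);         //           hipMemcpyHostToDevice);
          add<<<blocks, threads>>>(d_a, d_b, d_c);    // same triple-chevron launch syntax
          cudaDeviceSynchronize();                    // hipDeviceSynchronize();
          cudaFree(d_a);                              // hipFree(d_a);
          ```

          Because the mapping sits at the API level rather than the PTX level, the same source can target either vendor’s GPUs after recompilation.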

        • @Knock_Knock_Lemmy_In
          1
          1 day ago

          Ah, I’d hoped it was cross-platform, more like OpenCL. Thinking about it, though, a lower-level language would naturally be more platform-specific.