• MustrumR
    39 points · 8 months ago

    I program 2-3 layers above (TensorFlow) and those words reverberate all the way up.

    • Bipta
      19 points · 8 months ago

      I program and those words reverberate.

      • Pyro
        15 points · 8 months ago

        I reverberate.

        • Scew
          9 points · 8 months ago

          be.

    • @stingpie
      7 points · 8 months ago

      Recently, I’ve just given up trying to use CUDA for machine learning. Instead, I’ve been using (relatively) CPU-intensive activation functions & architectures to make up the difference. It hasn’t worked, but I can at least consistently inch forward.

  • THCDenton
    37 points · 8 months ago

    Oh cool, I got the wrong Nvidia driver installed. Guess I’ll reinstall Linux 🙃

  • Uranium3006
    16 points · 8 months ago

    Some numbnut pushed Nvidia driver code with compilation errors, and now I have to use an old kernel until it’s fixed.

  • baltakatei
    13 points · 8 months ago

    Nvidia: I have altered the deal, pray I do not alter it further.

  • @sprack
    8 points · 8 months ago

    Pretty much the exact reason containerized environments were created.

    • Terrasque
      3 points · 8 months ago

      Yep, I usually make Docker environments for CUDA workloads because of these things. Much more reliable.

        • @sprack
          1 point · 8 months ago

          When you hit that config need, the next step is a lightweight VM + PCIe passthrough.

    • @scrion
      12 points · 8 months ago

      I started working with CUDA at version 3 (so maybe around 2010?) and it was definitely more than rough around the edges at that time. Nah, honestly, it was a nightmare - I discovered bugs and deviations from the documented behavior on a daily basis. That kept up for a few releases, although I’ll mention that NVIDIA was/is really motivated to push CUDA for general-purpose computing, so the support was top notch - it still was in no way pleasant to work with.

      That being said, our previous implementation was using OpenGL and did in fact produce computational results as a byproduct of rendering noise on a lab screen, so there’s that.

    • @Skullgrid
      2 points · 8 months ago

      I don’t know wtf CUDA is, but the sentiment is pretty universal: please just fucking work, I want to kill myself.

      • @topinambour_rex
        3 points · 8 months ago

        CUDA turns a GPU into a very fast CPU for specific operations. It won’t replace the CPU, just assist it.

        Graphics are just maths: plenty of operations to display beautiful 3D models with beautiful lights, shadows, and shine.

        The same maths used to display 3D can be used to calculate other stuff, like ChatGPT’s engine.
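
        A minimal sketch of the kind of "specific operation" CUDA hands off to the GPU (an illustrative vector add with made-up names, not any particular project's code): the CPU sets up the data, and the GPU runs one tiny addition per thread, many thousands of them at once.

        ```cuda
        #include <cstdio>
        #include <cuda_runtime.h>

        // GPU kernel: each thread adds one pair of elements.
        __global__ void add(const float* a, const float* b, float* c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1 << 20;               // one million elements
            size_t bytes = n * sizeof(float);

            // Host (CPU) buffers.
            float *ha = new float[n], *hb = new float[n], *hc = new float[n];
            for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

            // Device (GPU) buffers; the GPU works on its own memory.
            float *da, *db, *dc;
            cudaMalloc((void**)&da, bytes);
            cudaMalloc((void**)&db, bytes);
            cudaMalloc((void**)&dc, bytes);
            cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

            // Launch enough 256-thread blocks to cover all n elements.
            int threads = 256, blocks = (n + threads - 1) / threads;
            add<<<blocks, threads>>>(da, db, dc, n);

            // Copy the result back (this also waits for the GPU to finish).
            cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
            printf("c[0] = %f\n", hc[0]);        // expect 3.0

            cudaFree(da); cudaFree(db); cudaFree(dc);
            delete[] ha; delete[] hb; delete[] hc;
            return 0;
        }
        ```

        The same pattern - copy data over, run thousands of threads, copy the result back - is what frameworks like TensorFlow use under the hood for the matrix maths behind things like ChatGPT.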

  • Presi300
    1 point · 8 months ago (edited)

    Insert JavaScript joke here

    spoiler

    Error: joke is undefined