As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • @[email protected]
    3 · 1 year ago

    Generative art is actually much easier to run than LLMs. You can get really high resolutions on SDXL (1024x1024) using only 8 GB of VRAM (although it'd be slow). There's no way you can fit anything but the smallest text generation models into that same amount of VRAM.
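
    For reference, here's a minimal sketch of that kind of low-VRAM SDXL setup using Hugging Face diffusers; fp16 weights plus CPU offload is one plausible way to squeeze under ~8 GB (exact memory use varies by version, and the model id is just the public SDXL base):

    ```python
    import torch
    from diffusers import StableDiffusionXLPipeline

    # Load SDXL base in fp16 to halve weight memory vs. fp32.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    )
    pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU
    pipe.enable_vae_slicing()        # decode the latents in slices to save VRAM

    image = pipe("a lighthouse at dusk, oil painting",
                 height=1024, width=1024).images[0]
    image.save("out.png")
    ```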

    • @pexavc
      1 · edit-2 · 1 year ago

      Oh wow, that's good to know. I always assumed visual generation would be way more intensive. I wouldn't have thought a text generation model would take up that much VRAM.

      Edit: how many parameters did you test with?

      • @[email protected]
        1 · 1 year ago

        Sorry, just seeing this now. I think with 24 GB of VRAM, the most you can fit is a 4-bit quantized 30B model, and even then you'd have to limit it to 2-3k tokens of context. Here's a chart for size comparisons: https://postimg.cc/4mxcM3kX
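
        The napkin math behind that, as a rough sketch (4-bit quantization is about 0.5 bytes per weight; the layer count and hidden size below are typical 30B-class dimensions, assumed rather than measured):

        ```python
        # Rough VRAM budget for a 4-bit quantized 30B model (all approximate).
        PARAMS = 30e9                    # 30B weights
        weights_gb = PARAMS * 0.5 / 1e9  # 4-bit ~0.5 bytes/weight -> ~15 GB

        # fp16 KV cache per token: 2 tensors (K and V) * layers * hidden * 2 bytes.
        LAYERS, HIDDEN = 60, 6656        # assumed 30B-class dimensions
        kv_mb_per_token = 2 * LAYERS * HIDDEN * 2 / 1e6   # ~1.6 MB/token
        cache_gb = 3000 * kv_mb_per_token / 1e3           # ~4.8 GB at 3k context

        print(f"weights ~{weights_gb:.0f} GB + 3k KV cache ~{cache_gb:.1f} GB")
        # ~20 GB before activations and overhead, hence the 2-3k cap on a 24 GB card.
        ```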

        By comparison, with 24 GB of VRAM, I only use half of that to generate a batch of eight 768x576 images. I also subscribe to mage.space, and I'm pretty sure they're able to handle all of their volume on an A100 and an A10G.
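
        That batch case looks roughly like this in diffusers (the SD 1.5 checkpoint is an assumption on my part; any SD-family checkpoint takes the same arguments):

        ```python
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,
        ).to("cuda")

        # One call produces the whole batch of eight 768x576 images.
        images = pipe(
            "studio photo of a ceramic teapot",
            height=576,
            width=768,
            num_images_per_prompt=8,
        ).images
        for i, img in enumerate(images):
            img.save(f"batch_{i}.png")
        ```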