• poVoq
    6 points · 9 months ago

    This is odd reporting: Stable Diffusion XL AFAIK already runs on a GPU with 8GB of RAM and usually doesn’t need that much time to generate an image either (depends on the GPU).

    • @[email protected]OP
      4 points · 9 months ago

      I think they got their numbers wrong. It says they shrank it down to 700 million parameters; that would make it smaller than SD 1.5, which means it should take way less than 8GB of RAM.
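      As a rough sanity check (my own back-of-the-envelope arithmetic, not from the article): at fp16 the weights alone take about 2 bytes per parameter, so a 700-million-parameter model is only ~1.3GB before activations, text encoders, the VAE, and framework overhead are counted. The UNet parameter counts below are commonly cited figures, used here only for illustration:

      ```python
      # Back-of-the-envelope VRAM estimate for model weights alone.
      # Assumes fp16 inference (2 bytes per parameter); activations and
      # overhead come on top, so real usage is noticeably higher.

      def weight_vram_gb(params: float, bytes_per_param: int = 2) -> float:
          """Approximate GB needed just to hold the weights."""
          return params * bytes_per_param / 1024**3

      # Illustrative parameter counts (UNet only, commonly cited figures):
      models = {
          "distilled model (700M)": 700e6,
          "SD 1.5 UNet (~860M)": 860e6,
          "SDXL UNet (~2.6B)": 2.6e9,
      }

      for name, n in models.items():
          print(f"{name}: ~{weight_vram_gb(n):.1f} GB at fp16")
      ```

      So even with generous overhead, 700M parameters fitting in 8GB is unremarkable; the headline number only makes sense if it refers to a larger variant.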

      • @[email protected]
        3 points · 9 months ago

        I’m guessing there’s a mix-up. The smallest version is 700 million parameters, possibly the one used for the reported generation times, but the largest (or not?) still needs 8GB. If I remember correctly, SD3 is supposed to have multiple versions, starting from 800 million parameters and going up, so this is going to be interesting.

  • RBG
    5 points · 9 months ago

    Is that feasible on a Raspberry Pi?

    • Scew
      3 points · 9 months ago (edited)

      No, lol. Well, I’m not 100% familiar with the Pi’s new offerings, but I don’t know about their PCIe capabilities. Direct quote:

      The tool can run on low-cost graphics processing units (GPUs) and needs roughly 8GB of RAM to process requests — versus larger models, which need high-end industrial GPUs.

      Makes your question seem silly when I try to imagine hooking my GPU, which is probably bigger than a Pi, up to a Pi.

      I’ve been running all the image generation models on a 2060 Super (8GB VRAM) up to this point, including SD-XL, the model they “distilled” theirs from… Not really sure what exactly they think differentiates them, reading the article…

    • @Wooki
      0 points · 9 months ago (edited)

      Lol, read the article: it cites “8GB VRAM”, and if I had to guess, it will only support Nvidia out of the gate.