And how much do they cost? And how do you like them?

  • @[email protected]
    1 year ago

    I should also mention for completeness that one can “rent” a computer with a large GPU and use it remotely on places like vast.ai, if you just want to dabble a bit. But then you’re also kind of in the position of keeping the GPU idle a fair bit of the time, just as if you had it locally.

    There may be someone running online Stable Diffusion-based services out there, but I haven’t gone looking to get a sense of the state of affairs there.

    EDIT: I should also note that you can run Stable Diffusion on your CPU. It will be very, very, very slow, and unless you just want to take a look at the UI or something, you are probably going to go bonkers pretty quickly if you try doing any significant work on the CPU. Might work if you just want to occasionally upscale an image – something that it’s pretty good at.
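
    If you want to poke at the CPU route yourself, here’s a minimal sketch using the Hugging Face diffusers library. Treat it as an untested illustration: the checkpoint name and prompt are just placeholders, not a recommendation.

    ```python
    # Minimal sketch: Stable Diffusion text-to-image on the CPU via diffusers.
    # Assumes `pip install diffusers transformers torch` and enough RAM for the model.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
        torch_dtype=torch.float32,         # CPUs generally want float32, not float16
    )
    pipe = pipe.to("cpu")

    # Expect this call to take minutes on a CPU rather than seconds on a GPU.
    image = pipe(
        "a watercolor painting of a lighthouse at dusk",
        num_inference_steps=25,
    ).images[0]
    image.save("lighthouse.png")
    ```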

    • w2tpmf
      1 year ago

      What if I have quad 12-core Xeons with 196GB of RAM?

      How slow are we talking? Would a prompt I can run on Mage.space in 3 minutes take my system hours? Or days?

      • tal
        1 year ago

        What if I have quad 12-core Xeons with 196GB of RAM?

        I have a 24-core i9-13900 and 128GB of RAM, and when I briefly tried it, I recall it being what I’d call unusably slow. That being said, I also just discovered that my water cooler’s pump has been broken and the poor CPU had been running with zero cooling for the past six months, throttling the bejesus out of itself, so maybe it’d be possible to improve on that a bit.

        If you seriously want to try it, I’d just give it a spin. It won’t cost you more than the time to download and install it, and you’ll know how it performs. And you’ll get to try the UI.

        I just don’t want to give the impression to people that they’re gonna be happy with on-CPU performance and then have them be disappointed, hence the qualifiers.

        EDIT: Here’s a fork designed specifically for the CPU that uses a bunch of other optimizations (like the turbo “do a generation in only a couple of iterations” thing, which I understand has some quality tradeoffs) and says it can get generation times down to something practical on a CPU, just a couple of seconds. It can’t do 1024x1024 images, though.

        https://github.com/rupeshs/fastsdcpu

        I haven’t used it, though. And I don’t think that “turbo” approach lets you use arbitrary models.
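
        If you want to experiment with that few-step idea without committing to that fork, here’s a rough sketch of the same trick via diffusers and the SD-Turbo checkpoint. To be clear, this is just my illustration of the general approach, not what fastsdcpu does under the hood, and I haven’t benchmarked it:

        ```python
        # Rough sketch of the few-step ("turbo") approach on the CPU via diffusers.
        # Assumes `pip install diffusers transformers torch`.
        import torch
        from diffusers import AutoPipelineForText2Image

        # SD-Turbo is distilled to work with very few denoising steps and no
        # classifier-free guidance, which is what makes CPU times more tolerable.
        pipe = AutoPipelineForText2Image.from_pretrained(
            "stabilityai/sd-turbo",
            torch_dtype=torch.float32,  # float32 for CPU
        )
        pipe = pipe.to("cpu")

        image = pipe(
            "a photo of a red fox in the snow",
            num_inference_steps=1,  # turbo-style models target 1-4 steps
            guidance_scale=0.0,     # guidance is disabled for these models
        ).images[0]
        image.save("fox.png")
        ```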