What are you independently running? What are the limitations of your setup in practice, and what level of hardware is really needed in your opinion?

I’m disabled and need a laptop, for reasons. Looking seriously at System76 options under $3k, so a 6-8GB VRAM RTX 4050 or 4060. I want to integrate SD into a Blender CAD workflow. Is this practical?

  • @Zarxrax

    I’m not very familiar with laptop cards, but I can tell you that you want an Nvidia RTX card, and try to get at least 8GB of VRAM, if not more.
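
    For reference, a quick way to confirm how much VRAM a card actually exposes once you have it (a minimal sketch, assuming a CUDA build of PyTorch is installed):

    ```python
    # Quick sanity check of which GPU PyTorch sees and how much VRAM it has.
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {props.name}")
        print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")
    else:
        print("No CUDA-capable GPU visible to PyTorch")
    ```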

    • @j4k3OP

      You mind sharing like a ballpark of what this means in practice?

      Like in FreeCAD, if I design an assembly like an entire 3D printer, I’m at the limits of what a 2.6GHz machine can practically work with. Editing a part tree takes 5+ minutes to change anything, so a dozen small changes gets to the point of being impractical. I couldn’t design a more complex assembly right now if I wanted to. Is there an analogous type of example with SD?

      What is a real-world limitation with 8GB? Just scaling, or is there some kind of iterative workflow step where it just isn’t practical?

      • @Zarxrax

        So next month, SDXL is supposed to be released for Stable Diffusion. This is essentially a new generation of Stable Diffusion models, which are higher resolution and promise to be much better than what is currently used. It has been announced that the minimum requirement for the SDXL models is 8GB of VRAM.

        AI stuff is just generally very heavy on VRAM use. The speed of the card determines how fast it can generate or process things, but enough VRAM is needed for this stuff to work at all.
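
        To make the VRAM point concrete, this is roughly how SD 1.5 gets squeezed onto a 6-8GB card with the diffusers library, using fp16 weights and attention slicing (a minimal sketch; the model ID and settings are illustrative, not from this thread):

        ```python
        # Minimal sketch: SD 1.5 in half precision with attention slicing,
        # which is the usual way to keep generation inside ~6-8GB of VRAM.
        # Assumes diffusers and transformers are installed; model ID is illustrative.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,      # fp16 roughly halves VRAM use
        )
        pipe.enable_attention_slicing()     # trade a little speed for lower peak VRAM
        pipe = pipe.to("cuda")

        image = pipe("a 3d printer on a workbench", num_inference_steps=25).images[0]
        image.save("out.png")
        ```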

  • poVoq

    I would be interested if anyone here has experience with running SD on an Intel Arc A770 16GB under Linux. It seems like the most affordable card with open drivers right now.

    • @j4k3OP

      I think the big thing with SD on Nvidia is the CUDA API. I’ve seen posts and articles on AMD’s efforts to push their open-source ROCm API as a replacement for Nvidia’s proprietary one. I thought I had commented on a post here with an article summary of a white paper showing that at the ultra high end, like $10k video cards, AMD is already performing at about 80% of Nvidia, but I’m having trouble finding it. I haven’t heard much about what Intel is doing in this space. I think it all comes down to this API and how widely it is adopted.
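
      As a rough illustration of how the API question looks from the framework side: the ROCm builds of PyTorch reuse the same torch.cuda interface (via HIP), so the same script can run on either vendor when the driver stack cooperates. A minimal sketch:

      ```python
      # Sketch: the same device-selection code covers Nvidia (CUDA) and
      # AMD (ROCm/HIP), because ROCm builds of PyTorch expose the GPU
      # through the torch.cuda namespace.
      import torch

      device = "cuda" if torch.cuda.is_available() else "cpu"
      print("Using:", device)
      if device == "cuda":
          # On a ROCm build this still reports the AMD card's name.
          print(torch.cuda.get_device_name(0))
      ```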

      • poVoq

        AMD doesn’t sell any suitable consumer cards for SD, though, and official ROCm support is limited to their datacenter GPUs. It can be made to work with consumer GPUs, but it is really hit or miss, and they drop compatibility with cards barely a few years old.

        Intel is working on an open-source CUDA equivalent called oneAPI that works with all major vendors, and from what I have heard it is already working better than ROCm for many things.
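
        For the Arc question above, my understanding is that the PyTorch route goes through Intel's oneAPI-based extension, which registers an "xpu" device. A hedged sketch, with the package and device names taken from Intel's docs rather than verified here:

        ```python
        # Hedged sketch: PyTorch on an Intel Arc card via Intel's oneAPI-based
        # extension, which registers an "xpu" device. Names follow Intel's docs;
        # not verified in this thread.
        import torch
        import intel_extension_for_pytorch as ipex  # noqa: F401 (registers the xpu backend)

        device = "xpu" if torch.xpu.is_available() else "cpu"
        x = torch.randn(4, 4, device=device)
        print(device, x.sum().item())
        ```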

  • @[email protected]

    I know it’s not Linux, but if you’re looking for a Unix system, I’ve had a good experience running SD on an M2 Max MacBook Pro. It does ~1-2 iterations per second at 512×512.
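
    Under the hood that works because PyTorch exposes Apple-silicon GPUs through the Metal (MPS) backend, so SD front ends mostly just pick a different device. A minimal sketch:

    ```python
    # Sketch: device selection on Apple silicon, where the GPU shows up
    # as PyTorch's "mps" device.
    import torch

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    x = torch.randn(512, 512, device=device)
    print(f"Running on {device}")
    ```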