I was using Stable Diffusion a lot previously, but haven’t really touched it in the past several months. I was wondering what interfaces people are using these days?

Automatic1111 still seems to be popular, and that’s the one I am most familiar with. I know there are some others now though, like ComfyUI, and I guess maybe InvokeAI is still going?

  • @[email protected]
    link
    fedilink
    English
    71 year ago

    Automatic1111’s Stable Diffusion WebUI is hard to give up, with how many features it has that are missing in other frontends. I use anapnoe’s fork for a slightly better UI. I would use vladmandic’s fork, but some of the changes have caused issues for my particular setup.

  • @2dollarsim · 7 points · 1 year ago

    Automatic1111 is the second one I tried and I never left. It’s the best.

  • @Swexti · 4 points · edited · 1 year ago

    Is no one here running ComfyUI? It’s one of my favorite UIs, as it’s completely node-based and extensible! Has everything auto1111 has and even more! EDIT: It’s not really everything, but almost!

    • KiranWells · 1 point · 1 year ago

      I’m also using ComfyUI. It simply has the ability to do so much more than something like Automatic1111, even if it is missing a couple of features. For example, I have several workflows that make incremental changes to an image, switching the prompt halfway through the generation, or even upscaling halfway through the generation.

      I can’t really imagine going back, unless there is some killer feature that Comfy is missing.
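      The prompt-switch trick described above can be expressed in ComfyUI’s API workflow format: two `KSamplerAdvanced` nodes split the denoising steps, with the second stage receiving a different text conditioning. A minimal sketch follows — the node class names are ComfyUI built-ins, but the checkpoint filename, prompts, and step counts are illustrative assumptions, and queuing the workflow assumes a local ComfyUI server on its default port.

      ```python
      import json

      def two_stage_workflow(prompt_a, prompt_b, switch_step=10, total_steps=20, seed=42):
          """Build an API-format ComfyUI workflow that denoises with prompt_a for the
          first `switch_step` steps, then finishes the remaining steps with prompt_b."""
          return {
              "1": {"class_type": "CheckpointLoaderSimple",
                    "inputs": {"ckpt_name": "sd15.safetensors"}},  # hypothetical filename
              "2": {"class_type": "CLIPTextEncode",  # stage-1 positive prompt
                    "inputs": {"text": prompt_a, "clip": ["1", 1]}},
              "3": {"class_type": "CLIPTextEncode",  # stage-2 positive prompt
                    "inputs": {"text": prompt_b, "clip": ["1", 1]}},
              "4": {"class_type": "CLIPTextEncode",  # shared negative prompt
                    "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
              "5": {"class_type": "EmptyLatentImage",
                    "inputs": {"width": 512, "height": 512, "batch_size": 1}},
              # Stage 1: denoise steps 0..switch_step, keep leftover noise for stage 2.
              "6": {"class_type": "KSamplerAdvanced",
                    "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["4", 0],
                               "latent_image": ["5", 0], "add_noise": "enable",
                               "noise_seed": seed, "steps": total_steps, "cfg": 7.0,
                               "sampler_name": "euler", "scheduler": "normal",
                               "start_at_step": 0, "end_at_step": switch_step,
                               "return_with_leftover_noise": "enable"}},
              # Stage 2: switch conditioning to prompt_b and finish the schedule.
              "7": {"class_type": "KSamplerAdvanced",
                    "inputs": {"model": ["1", 0], "positive": ["3", 0], "negative": ["4", 0],
                               "latent_image": ["6", 0], "add_noise": "disable",
                               "noise_seed": seed, "steps": total_steps, "cfg": 7.0,
                               "sampler_name": "euler", "scheduler": "normal",
                               "start_at_step": switch_step, "end_at_step": total_steps,
                               "return_with_leftover_noise": "disable"}},
              "8": {"class_type": "VAEDecode",
                    "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
              "9": {"class_type": "SaveImage",
                    "inputs": {"images": ["8", 0], "filename_prefix": "two_stage"}},
          }

      wf = two_stage_workflow("a castle at dawn", "a castle at night")
      # To queue it on a locally running ComfyUI instance (default port), one would POST:
      # requests.post("http://127.0.0.1:8188/prompt", json={"prompt": wf})
      print(json.dumps(wf, indent=2)[:80])
      ```

      The same pattern extends to the upscale-midway workflows mentioned above: insert a latent upscale node between the two samplers instead of (or in addition to) swapping the prompt.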

  • @the_ramzay · 2 points · 1 year ago

    Draw Things and InvokeAI on my MacBook Pro M1 Pro.

  • IntheTreetop · 1 point · 1 year ago

    I’m stuck with an AMD card for other purposes, so I pretty much have to use the DirectML fork. It’s okay, but very slow, and despite having 12 GB of video RAM I still get out-of-memory messages all the time. Hopefully some progress will be made on those cards soon.

    But it is fun, that’s for sure.

  • Pennomi · 1 point · 1 year ago

    InvokeAI has a very beautiful unified inpainting and outpainting interface with an infinite canvas.

  • danielbln · 1 point · 1 year ago

    Auto1111 on RunDiffusion.

  • radialmonster · 1 point · 1 year ago

    I used to use InvokeAI. InvokeAI just released version 3 beta; I’ll wait for a few more betas or an RC before using 3 again.
    These days I usually use Makeayo.

  • @[email protected]
    link
    fedilink
    English
    -2
    edit-2
    1 year ago

    The porn industry is doomed!

    Edit: well I think the title of this post changed or my comment ended up on the wrong post so my comment is irrelevant lol

    • @2dollarsim · 1 point · 1 year ago

      Not for videos; we are still quite a way from that.

        • @2dollarsim · 1 point · 1 year ago

          Haha it won’t be a joke next week when the new text-to-video model comes out

          • @[email protected]
            link
            fedilink
            English
            21 year ago

            With how unstable (lol) txt2img is, I don’t believe a txt2video model stable enough for porn is coming soon.

            • @2dollarsim · 1 point · 1 year ago

              I would agree, but the rate of innovation in AI is so unpredictable that it could go either way.

              • @[email protected]
                link
                fedilink
                English
                11 year ago

                I don’t really agree.

                Recent AI innovations are pretty modest and use the innovation of raw fucking power to achieve goals.

                GPT-4 uses 230B parameters, whereas running a 7B LLM already needs 16 GB of VRAM, and LLMs are O(n²) in complexity in terms of parameters. I’ll let you do the maths.

                Stable Diffusion (latent diffusion, to be more precise) is about the same: the initial training required billions of teraflops, and while it was relatively cheap (~$100k), it still rides on modern GPU technology.