Do you keep it simple? Just long enough? Go wild with it? How about embeddings, do you also use them?

The more I learn about this, the more I don’t understand it. Outside of some basic enhancers (masterpiece, best quality, worst quality, and bad anatomy/hands etc. if I’m generating a human), I don’t see any big improvements. Every combination gives a different result; some look better, some look worse depending on the seed, sampler, etc. It’s basically a matter of taste. Note that I only do illustrations/paintings, so the differences might not be as noticeable. Do you keep tweaking your prompts, or do you settle on the prompts you’ve been using?
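(A minimal way to A/B-test those enhancer tags, assuming the Hugging Face diffusers library and a Stable Diffusion 1.5 checkpoint; neither tool is named in this thread, and the subject prompt is made up. The same seed is rendered with and without the tags, so the tags are the only variable.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint: any SD 1.5-class model id or local path works here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base_prompt = "oil painting of a lighthouse on a stormy coast"  # hypothetical subject
enhanced_prompt = "masterpiece, best quality, " + base_prompt
negative = "worst quality, bad anatomy, bad hands"

# Same seed for both runs so only the enhancer tags differ.
for name, prompt in [("plain", base_prompt), ("enhanced", enhanced_prompt)]:
    generator = torch.Generator("cuda").manual_seed(42)
    image = pipe(prompt, negative_prompt=negative, generator=generator).images[0]
    image.save(f"{name}.png")
```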

  • @DrakeRichards · 1 year ago
    I don’t bother with prompt enhancers anymore. Stable Diffusion isn’t MidJourney; quantity is far more important than quality. I just prompt for what I want and add negative prompts for things that show up that I don’t want. I’ll use textual inversions like badhandv4 if the details look really bad. If the model isn’t understanding the prompt at all, I’ll use ControlNet.
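    (A sketch of that workflow, assuming the diffusers library as the frontend; the comment doesn’t name a tool, and the embedding path below is hypothetical. badhandv4 is loaded as a textual inversion and triggered from the negative prompt.)

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # Bind the badhandv4 embedding to a trigger token (local path is hypothetical).
    pipe.load_textual_inversion("embeddings/badhandv4.pt", token="badhandv4")

    image = pipe(
        prompt="watercolor portrait of an old fisherman",    # describe what you want
        negative_prompt="badhandv4, extra fingers, blurry",   # only what keeps showing up
        num_inference_steps=28,
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]
    image.save("portrait.png")
    ```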

    • @lemmywink · 1 year ago

      Agreed, although too much quantity seemed to water down results quite a bit. Too many tokens and I have to up the weights of nearly everything to 1.2-1.4, otherwise the aspects I want to show start to drop off.

      Anecdotally, I found the best length to be about 75 positive tokens, though I’d recommend never going over the 150-token limit if you can help it.

      I do have a canned negative prompt list that I use that is super long though, easily 200 tokens. It’s just a hodgepodge of some of the things you listed: bad_anatomy, missing_limbs, and missing_hands, for example, are crucial to have. I’ve also found that adding ugly with a weight over 0.8 has strange results. Hope that helps!
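      (Those weight values look like AUTOMATIC1111-style (token:weight) syntax. Assuming that web UI is running locally with the --api flag, which the thread never states, a request might look roughly like this; the endpoint, payload, and prompt contents are illustrative, not from the thread.)

      ```python
      import base64
      import requests

      payload = {
          "prompt": "(masterpiece:1.2), (best quality:1.2), "
                    "gouache painting of a red fox in (autumn woods:1.3)",
          "negative_prompt": "(worst quality:1.4), bad_anatomy, missing_limbs, "
                             "missing_hands, (ugly:0.8), lowres, jpeg artifacts",
          "steps": 28,
          "cfg_scale": 7,
          "width": 512,
          "height": 768,
          "seed": 12345,
      }

      resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
      resp.raise_for_status()

      # The API returns generated images as base64-encoded PNGs.
      with open("fox.png", "wb") as f:
          f.write(base64.b64decode(resp.json()["images"][0]))
      ```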

      • @DrakeRichards · 1 year ago

        I meant the quantity of generated images, not the number of tokens. I rarely go over 50 tokens now. As you said, with too many tokens things start to interact in really odd ways. That’s why I’m not a fan of massive lists of negative tokens either; they’re handled much more efficiently by a textual inversion like badhandv4 or Easynegative.

        However, I only use txt2img to get the rough composition of an image; most of my work is done in inpainting afterwards. If you’re looking to have good images just from txt2img then sometimes lots of tokens are necessary.

        Just like traditional art though, this is all based on individual style. It’s important to use what works best for you.
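        (A rough sketch of that txt2img-then-inpaint step, assuming the diffusers library; the commenter doesn’t name a tool, and the model id, image paths, and mask are placeholders. The Easynegative embedding stands in for a long hand-written negative list.)

        ```python
        import torch
        from diffusers import StableDiffusionInpaintPipeline
        from diffusers.utils import load_image

        pipe = StableDiffusionInpaintPipeline.from_pretrained(
            "runwayml/stable-diffusion-inpainting",  # placeholder inpainting checkpoint
            torch_dtype=torch.float16,
        ).to("cuda")

        # One compact negative embedding instead of ~200 negative tokens (path is hypothetical).
        pipe.load_textual_inversion("embeddings/easynegative.safetensors", token="Easynegative")

        base = load_image("txt2img_composition.png")  # rough composition from txt2img
        mask = load_image("hands_mask.png")           # white where the fix should go

        fixed = pipe(
            prompt="detailed hands, oil painting",
            negative_prompt="Easynegative",           # trigger token for the embedding
            image=base,
            mask_image=mask,
            generator=torch.Generator("cuda").manual_seed(3),
        ).images[0]
        fixed.save("fixed.png")
        ```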

    • akai · 1 year ago

      Do you have any good beginner’s guides for ControlNet?