I have used several different generators. What they all seem to have in common is that they don’t always produce what I am asking for. Example: if I ask for a person in jeans and a t-shirt, I will get images of a person wearing totally different clothing, and it isn’t consistent. Another example: if I want a full-body picture, that instruction seems to get ignored and I get just waist up or just below the waist. Same goes if I ask for side views or back views. Sometimes they work. Sometimes they don’t. More often they don’t.

I have also noticed that none of the negative requests seem to actually work. If I ask for pictures of people without cell phones or tattoos, like magic they have cell phones, and some have tattoos. I have seen this in every single generator I have used. Am I asking for things the wrong way, or is the AI doing whatever it wants and not paying attention to my actual request?

Thanks

  • vibinya
    6 months ago

    My favorite has been locally hosting Automatic1111’s UI. The setup process was super easy, and you can get great checkpoints and models on Civitai. This gives me complete control over the models and the generation process. I think it’s an expectation thing as well. Learning how to write the right prompt, adjusting the settings for the loaded checkpoint, and running enough iterations to get what you’re looking for can take a bit of patience and time. It may be worth learning how the AI actually ‘draws’ things to adjust how you interact with it and write prompts. There’s actually A LOT of control you gain by locally hosting - ControlNet, LoRA, checkpoint merging, etc. Definitely look up guides on prompt writing and learn about weights, order, and how negative prompts actually influence generation.
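
    Not the A1111 UI itself, but here’s a rough sketch of the same ideas using the Hugging Face diffusers library in Python; the model name, prompts, and settings below are just placeholders to show how the prompt, negative prompt, and guidance scale feed into a local generation:

    ```python
    # Minimal sketch with the diffusers library (not the A1111 UI itself);
    # model id, prompts, and settings are placeholders to illustrate the idea.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder; any diffusers-format checkpoint works
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "full body photo of a person in jeans and a t-shirt, back view"
    negative_prompt = "cell phone, tattoo, cropped, close-up"  # things to steer away from

    image = pipe(
        prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=30,   # more steps = more refinement, slower
        guidance_scale=7.5,       # how strongly the prompt is followed
    ).images[0]
    image.save("output.png")
    ```

    In the A1111 prompt box the same ideas apply, and you also get weight syntax like (full body:1.3) to push a term harder.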

    • EdgeRunner
      6 months ago

      I’ve started with stablediffusion_webui, I feel you!!