It’s actually running light on the checkpoints and loras for some of my gens. But heavy on the style keywords. I think my record is 23(?) different resources.
How on earth do you generate this? Do you use a tool or are these manually put in?
The software I use is called Stable Diffusion, a Python-based FOSS AI image generation program. You interact with it through a local web interface; the one I use is called AUTOMATIC1111, or A1111 for short. A1111 can be loaded with different extensions that add pre- and post-processing tools. You can also access free community-made resources and make online gens at civitai.com; it's free to start an account, and if you want I can give you a referral link. From there you download gigabytes' worth of base models (checkpoints) and small add-on models called 'LoRAs', which you can load into Stable Diffusion. You can find LoRAs trained on all kinds of subjects to make it easier to generate the specific things you want. Then you choose which resources to use, build a prompt for what you want, build a negative prompt to remove the things you don't want, and play with the various configuration settings in Stable Diffusion while generating test images until you get the intended result.
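To give you an idea of what that looks like in practice, here's a made-up example pair (the keywords and the LoRA name are just placeholders, but `<lora:name:weight>` is A1111's actual syntax for loading a LoRA at a given strength):

```
Prompt:   masterpiece, best quality, oil painting, lighthouse at dusk, <lora:oil_style:0.8>
Negative: blurry, low quality, watermark, extra fingers
```

The weight (0.8 here) controls how strongly that LoRA influences the image, and you can stack several of these tags in one prompt.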
This is what my interface looks like:
And each of those tabs across the top is a different tool for pre- and post-processing.
That would be awesome. Thanks for this!
There are also Automatic1111 extensions to set up batch jobs with a variety of prompts.
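To give a sense of how that prompt variety gets generated, here's a minimal Python sketch (not an actual A1111 extension; the attribute lists are made up) of the basic idea those batch tools use, expanding a few attribute lists into every combination:

```python
from itertools import product

# Hypothetical attribute lists; a real batch extension would read
# these from its own UI or a wildcard file.
hair = ["red hair", "black hair", "blonde hair"]
outfit = ["casual outfit", "formal outfit"]
pose = ["standing", "sitting"]

# Cartesian product: every combination of attributes becomes one prompt.
prompts = [", ".join(combo) for combo in product(hair, outfit, pose)]

print(len(prompts))  # 3 * 2 * 2 = 12 prompts
print(prompts[0])    # red hair, casual outfit, standing
```

A few short lists multiply out fast, which is how you end up with thousands of prompt permutations from a small amount of input.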
Also, if you launch Automatic1111 with the --api option, external software can make automated requests to it. I recently noticed that an adult game had someone generate an image pack of something like 10k different body permutations from prompt permutations.
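In --api mode the request is just JSON posted to A1111's local txt2img endpoint. Here's a rough sketch (the prompt text is made up, and while the endpoint path and field names match A1111's API as I know it, your install's /docs page is the authoritative reference):

```python
import json

# Build the JSON body for A1111's /sdapi/v1/txt2img endpoint.
# Field names follow A1111's API schema; the values are examples.
def build_txt2img_payload(prompt, negative_prompt="", steps=20,
                          cfg_scale=7.0, width=512, height=512, seed=-1):
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": width,
        "height": height,
        "seed": seed,  # -1 asks for a random seed
    }

payload = build_txt2img_payload("a lighthouse at dusk, oil painting",
                                negative_prompt="blurry, low quality")
body = json.dumps(payload)

# Sending it (assumes A1111 is running locally with --api):
# import urllib.request
# req = urllib.request.Request("http://127.0.0.1:7860/sdapi/v1/txt2img",
#                              data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# resp = json.load(urllib.request.urlopen(req))
# resp["images"] is a list of base64-encoded PNGs
```

Loop that over a list of prompt permutations and you can churn out image packs unattended, which is presumably how that 10k pack was made.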
That’s really cool to know, thanks! You can cover a lot of ground using tools like that, probably excellent for creating training data.