These look great, I love them!
What kind of prompt/model did you use?
Appreciated!
Fair warning: I use a lot of explicit loras and prompts in combination with large sets of style keywords, so I get results from tools you wouldn’t expect, I guess. Just don’t be shocked by anything you see. For example, these use an amateur pornographic checkpoint that, if I remove it, actually makes the images more risqué. I also use a local SD install, so I have significantly fewer limitations than online generators.
spoiler
Prompt:
hologram of 3d fluffy ethereal fantasy concept art of neonpunk style long shot scenic professional photograph of cinematic photo breathtaking Masterpiece, maximum fidelity, ultra high detail, ultra HD, Panavision Millennium DXL2 8k photograph of a cinematic ethereal fantasy concept of breathtaking masterpiece, epic sexual fantasy, exquisitely explicit, Golden metal, superior breasts, (cheeky grin:1.1), regal, demure, (barbarian queen:1.2), style of andre yidim emauromin style Glass r/LegalTeens r/adorableporn r/xsmallgirls . award-winning, professional, highly detailed . 35mm photograph, film, bokeh, professional, 4k, highly detailed, perfect viewpoint, highly detailed, wide-angle lens, hyper realistic, with dramatic sky, polarizing filter, natural lighting, vivid colors, everything in sharp focus, HDR, UHD, 64K . cyberpunk, vaporwave, neon, vibes, vibrant, stunningly beautiful, crisp, detailed, sleek, ultramodern, magenta highlights, dark purple shadows, high contrast, cinematic, ultra detailed, intricate, professional . magnificent, celestial, ethereal, painterly, epic, majestic, magical, fantasy art, cover art, dreamy, closeup cute and adorable, cute big circular reflective eyes, long fuzzy fur, Pixar render, unreal engine cinematic smooth, intricate detail, cinematic floating in space, a vibrant digital illustration, dribbble, quantum wavetracing, black background, behance hd
Neg:
sfw, lip-teeth, skin, film grain, excess noise amateur, (blurry:1.3), (script, inscription, logo, watermark, signature, title, attribution, text, words:1.5), cropped, out of frame, (worst quality, low quality, poor quality), jpeg artifacts, poorly lit, overexposed, underexposed, glitch, error, out of focus, soft textures, low resolution, low fidelity, (ghibli grain,semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, digital art, anime, manga:1.2), grayscale, monochrome, ugly, deformed, noisy, blurry, distorted, grainy, drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly, painting, drawing, illustration, glitch, deformed, mutated, cross-eyed, ugly, disfigured, photographic, realistic, realism, 35mm film, dslr, cropped, frame, text, deformed, glitch, noise, noisy, off-center, deformed, cross-eyed, closed eyes, bad anatomy, ugly, disfigured, sloppy, duplicate, mutated, black and white
Params:
Steps: 55, Sampler: DPM++ 2M Karras, CFG scale: 6.5, Seed: 191769966, Size: 512x768, Model hash: 1fe6c7ec54, Model: juggernautXL_version6Rundiffusion, Denoising strength: 0.25, Clip skip: 2, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.25, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer ControlNet model: control_v11p_sd15_inpaint [ebff9138], ADetailer ControlNet module: inpaint_global_harmonious, ADetailer model 2nd: Eyes.pt, ADetailer confidence 2nd: 0.3, ADetailer dilate erode 2nd: 4, ADetailer mask blur 2nd: 4, ADetailer denoising strength 2nd: 0.25, ADetailer inpaint only masked 2nd: True, ADetailer inpaint padding 2nd: 32, ADetailer ControlNet model 2nd: control_v11p_sd15_inpaint [ebff9138], ADetailer ControlNet module 2nd: inpaint_global_harmonious, ADetailer version: 23.11.0, ControlNet 0: "Module: inpaint_global_harmonious, Model: control_v11p_sd15_inpaint [ebff9138], Weight: 1.0, Resize Mode: ResizeMode.INNER_FIT, Low Vram: False, Guidance Start: 0.0, Guidance End: 1.0, Pixel Perfect: True, Control Mode: ControlMode.BALANCED, Save Detected Map: True", Hires upscale: 2, Hires steps: 16, Hires upscaler: 8x_NMKD-Superscale_150000_G, Lora hashes: "ahx_v1: 13e93b53e403, age_slider_v2: ccfb7be24491, add_detail: 7c6bad76eb54, perky_breasts1: 429abb241784, eddiemauroLora2: 73a91f50a4fe, epi_noiseoffset2: d1131f7207d6, Glass: 34325be36a2a, breastsizeslideroffset: ca4f2f9fba92", Version: v1.6.0-2-g4afaaf8a, Hashes: {"lora:ahx_v1": "47fb408904", "lora:age_slider_v2": "172eaca6ac", "lora:add_detail": "47aaaf0d29", "lora:perky_breasts1": "703af972dd", "lora:epi_noiseoffset2": "81680c064e", "lora:Glass": "b0838dc7bb", "lora:pornmasterAmateur_fullV6-inpainting": "b020e27c10", "model": "1fe6c7ec54"}
Holy fuck was this one prompt???
This is actually light on the checkpoints and loras compared to some of my gens, but heavy on the style keywords. I think my record is 23(?) different resources.
There are also Automatic1111 extensions to set up batch jobs with a variety of prompts.
Also, if you launch Automatic1111 with the --api option, external software can send it automated requests. I recently noticed that an adult game had someone generate an image pack of something like 10k different body permutations from prompt permutations.
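To give a rough idea of how that kind of permutation batch works, here’s a minimal sketch. It assumes A1111 was launched with --api on the default port (7860) and uses the standard /sdapi/v1/txt2img endpoint; the tag lists and base prompt are just made-up placeholders.

```python
# Sketch: drive the A1111 API with one request per prompt permutation.
# Assumes the web UI is running locally with --api on the default port.
import itertools
import json
import urllib.request

API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_payloads(base_prompt, negative, options):
    """Expand lists of interchangeable tags into one payload per combination."""
    keys = list(options)
    payloads = []
    for combo in itertools.product(*(options[k] for k in keys)):
        payloads.append({
            "prompt": f"{base_prompt}, " + ", ".join(combo),
            "negative_prompt": negative,
            "steps": 30,
            "width": 512,
            "height": 768,
        })
    return payloads

def submit(payload):
    """POST one payload; the response JSON contains base64-encoded images."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payloads = build_payloads(
    "professional photo of a barbarian queen",
    "blurry, low quality",
    {"hair": ["red hair", "black hair"], "build": ["slim", "muscular"]},
)
print(len(payloads))  # 2 hair x 2 build = 4 permutations
```

Scale the option lists up and the combinations multiply fast, which is how you end up with 10k-image packs from a handful of tag lists.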
That’s really cool to know, thanks! You can cover a lot of ground using tools like that, probably excellent for creating training data.
How on earth do you generate this? Do you use a tool or are these manually put in?
The software I use is called Stable Diffusion; it’s Python-based FOSS AI image-generation software. You interact with it through a local web interface, and the one I use is called Automatic1111, or A1111 for short. A1111 can be loaded with different extensions that add pre- and post-processing tools. You can also access free community-made resources and do online gens at civitai.com; it’s free to start an account, and if you want I can give you a referral link. From there you download gigabytes’ worth of base models and smaller trained add-on packages called LoRAs, which you load into Stable Diffusion. You can find different training data to make it easier to generate the specific things you want. Then you choose which resources to use, build a prompt for what you want, build a negative prompt to remove the things you don’t want, and play with Stable Diffusion’s various configuration settings while generating test images until you get the intended result.
This is what my interface looks like:
And each of those tabs across the top is a different tool for pre- and post-processing.
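The prompt-building step above can be sketched as plain string assembly. The `(token:1.2)` emphasis weighting and `<lora:name:0.8>` activation are standard A1111 prompt syntax; the specific tags and lora names below are made up for illustration.

```python
# Illustrative helper for assembling an A1111-style prompt string.
def build_prompt(tags, weighted=None, loras=None):
    """Join plain tags, (token:weight) emphasis, and <lora:...> activations."""
    parts = list(tags)
    for token, w in (weighted or {}).items():
        parts.append(f"({token}:{w})")       # A1111 attention weighting
    for name, w in (loras or {}).items():
        parts.append(f"<lora:{name}:{w}>")   # lora activation with strength
    return ", ".join(parts)

prompt = build_prompt(
    ["cinematic photo", "natural lighting"],
    weighted={"barbarian queen": 1.2},
    loras={"add_detail": 0.7},
)
print(prompt)
# cinematic photo, natural lighting, (barbarian queen:1.2), <lora:add_detail:0.7>
```

The negative prompt is just a second string built the same way, so in practice it comes down to curating the tag lists.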
That would be awesome. Thanks for this!
Oh wow… I did not expect that…
How long did you work on these?
Maybe an hour of playing with that base model and my loras to hit the style, the rest took all of 20 mins.