It’s a multi-step process on my local machine. I use the vladmandic fork of the AUTOMATIC1111 Stable Diffusion repo.
I created a depth map from the original image to give the AI some pointers about the character positioning in the picture via ControlNet.
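The actual run happens in the web UI, but if you want a feel for what the depth/ControlNet step does, here is a rough sketch of the same idea with the diffusers library; the depth estimator and checkpoint names are just example models, not what I actually used:

```python
# Rough sketch of a depth-conditioned generation pass with diffusers.
# Model names below are placeholders, not the exact ones from my workflow.
import torch
from PIL import Image
from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Estimate a depth map from the original drawing.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
source = Image.open("drawing.png").convert("RGB")
depth_map = depth_estimator(source)["depth"]

# Load a depth ControlNet plus a Stable Diffusion checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The depth map conditions the sampling so the characters keep their
# positions from the original drawing.
result = pipe(
    prompt="photo of a group of people in a living room, photorealistic",
    negative_prompt="cartoon, drawing, illustration",
    image=depth_map,
    num_inference_steps=30,
).images[0]
result.save("photoreal_base.png")
```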
There are several models on Civitai that specialize in photorealism. I used one of those with a fitting prompt and a lot of embeddings in the negative prompt to force the photorealism.
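In diffusers terms, the equivalent would be loading the downloaded checkpoint plus textual-inversion embeddings and referencing them in the negative prompt; the file names and the "easynegative" token below are just stand-ins for whatever embeddings you actually use:

```python
# Sketch of loading a photorealism checkpoint and negative embeddings.
# "photoreal_checkpoint.safetensors" and "easynegative" are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "photoreal_checkpoint.safetensors", torch_dtype=torch.float16
).to("cuda")

# Textual-inversion embeddings are loaded once and then referenced by
# their token inside the negative prompt.
pipe.load_textual_inversion("easynegative.safetensors", token="easynegative")

image = pipe(
    prompt="RAW photo, group portrait, natural lighting, 85mm, film grain",
    negative_prompt="easynegative, cartoon, illustration, 3d render, painting",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("photoreal_pass.png")
```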
Not long ago you then had to inpaint every face manually in the next step. But now we have the ADetailer plugin, which detects all the faces and automatically creates an inpaint mask for each one.
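Conceptually it does something like the sketch below: detect faces, turn the bounding boxes into masks, and run an inpainting pass per face. The real extension uses much better detectors; the Haar cascade here is only to keep the example self-contained, and it assumes the working image dimensions are multiples of 8:

```python
# Simplified sketch of an "after-detailer" style face fix-up pass.
# The actual ADetailer extension uses stronger detection models; the
# Haar cascade is just a self-contained stand-in.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

image = Image.open("photoreal_pass.png").convert("RGB")
gray = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Inpaint one face at a time so each one gets its own dedicated pass.
for (x, y, w, h) in faces:
    mask = Image.new("L", image.size, 0)
    mask.paste(255, (x, y, x + w, y + h))  # white box = area to repaint
    image = inpaint(
        prompt="detailed photorealistic face, sharp focus",
        negative_prompt="cartoon, deformed, blurry",
        image=image,
        mask_image=mask,
        width=image.width,   # keep the original resolution so the face
        height=image.height, # coordinates stay aligned between passes
    ).images[0]

image.save("faces_fixed.png")
```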
The whole process takes about 10 to 15 minutes.
The evolution of Stable Diffusion is quite fascinating. Every day there is something new, and it’s hard to keep track of it all.
I tried to make them more likeable, and it seems Sam ended up in the center of attention.
That’s amazing. I don’t recall seeing a use of AI where a drawn image is converted into a real one.